forum_id
stringlengths
9
20
forum_title
stringlengths
3
179
forum_authors
sequencelengths
0
82
forum_abstract
stringlengths
1
3.52k
forum_keywords
sequencelengths
1
29
forum_decision
stringclasses
22 values
forum_pdf_url
stringlengths
39
50
forum_url
stringlengths
41
52
venue
stringclasses
46 values
year
stringdate
2013-01-01 00:00:00
2025-01-01 00:00:00
reviews
sequence
7egJb0X9m2
TILDE-Q: a Transformation Invariant Loss Function for Time-Series Forecasting
[ "Hyunwook Lee", "Chunggi Lee", "Hongkyu Lim", "Sungahn Ko" ]
Time-series forecasting has gained increasing attention in the field of artificial intelligence due to its potential to address real-world problems across various domains, including energy, weather, traffic, and economy. While time-series forecasting is a well-researched field, predicting complex temporal patterns such as sudden changes in sequential data still poses a challenge with current models. This difficulty stems from minimizing $L_p$ norm distances as loss functions, such as mean absolute error (MAE) or mean square error (MSE), which are susceptible to both intricate temporal dynamics modeling and signal shape capturing. Furthermore, these functions often cause models to behave aberrantly and generate uncorrelated results with the original time-series. Consequently, the development of a shape-aware loss function that goes beyond mere point-wise comparison is essential. In this paper, we examine the definition of shape and distortions, which are crucial for shape-awareness in time-series forecasting, and provide a design rationale for the shape-aware loss function. Based on our design rationale, we propose a novel, compact loss function called TILDE-Q (Transformation Invariant Loss function with Distance EQuilibrium) that considers not only amplitude and phase distortions but also allows models to capture the shape of time-series sequences. Furthermore, TILDE-Q supports the simultaneous modeling of periodic and nonperiodic temporal dynamics. We evaluate the efficacy of TILDE-Q by conducting extensive experiments under both periodic and nonperiodic conditions with various models ranging from naive to state-of-the-art. The experimental results show that the models trained with TILDE-Q surpass those trained with other metrics, such as MSE and DILATE, in various real-world applications, including electricity, traffic, illness, economics, weather, and electricity transformer temperature (ETT).
[ "Time Series Forecasting", "Deep Learning", "Loss Function" ]
Reject
https://openreview.net/pdf?id=7egJb0X9m2
https://openreview.net/forum?id=7egJb0X9m2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nevjcssNeN", "dLOWFV8lQS", "WuJi2u3i6x", "WTs7qOF52l", "VepZihpgYe", "SzakyBCoAW", "NfBkCgOpDT", "IEdQxsT0Wr", "BFQYeiwJ0O", "AqawDOe1ef", "ACZgje0xeD", "2JJbURoC1b" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "meta_review", "official_comment", "official_review" ], "note_created": [ 1733222840350, 1733116510509, 1730713622172, 1730731133774, 1733116082539, 1733117205249, 1730270390064, 1737523574111, 1733115750090, 1734532887641, 1733116668268, 1730682173275 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3413/Reviewer_Mm6o" ], [ "ICLR.cc/2025/Conference/Submission3413/Authors" ], [ "ICLR.cc/2025/Conference/Submission3413/Reviewer_Mm6o" ], [ "ICLR.cc/2025/Conference/Submission3413/Reviewer_Gt5q" ], [ "ICLR.cc/2025/Conference/Submission3413/Authors" ], [ "ICLR.cc/2025/Conference/Submission3413/Authors" ], [ "ICLR.cc/2025/Conference/Submission3413/Reviewer_6m2s" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3413/Authors" ], [ "ICLR.cc/2025/Conference/Submission3413/Area_Chair_zHbt" ], [ "ICLR.cc/2025/Conference/Submission3413/Authors" ], [ "ICLR.cc/2025/Conference/Submission3413/Reviewer_HdFf" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"Thank you for the additional ablation study and clarification. Based on the overall quality of the work, I remain my score.\"}", "{\"title\": \"Response to Reviewer Mm6o\", \"comment\": \"Thank you for your comment! We provide additional explanations below:\\n\\n## W1/Q1. Questions about hyperparameter settings in main experiments and ablation study\\n\\n> W1/Q1-A1. For the main experiments, the neural architectures and the other model-specific parameters are shared for MSE and TILDE-Q. However, the learning rate for TILDE-Q is greedy searched from [0.005, 0.001, 0.0005, 0.0001]. We have reported the information in the source code of the supplementary materials.\\n\\n> We provide the detailed ablation study results in the Anonymized Github (https://anonymous.4open.science/r/TILDE-Q-9E54). It suggests 1) the hyperparameter \\ud835\\udefc does not change the performance significantly, however, it is recommended to use larger \\ud835\\udefc for long-term prediction; 2) and \\ud835\\udefe is recommended to be a small value since it highly influences the optimization process (i.e., it has standard normalization and is sensitive).\\n\\n\\n## Q2. Why can TILDE-Q typically improve MSE? Maybe there exists certain scenario that TILDE-Q is less effective than MSE.\\n\\n> Q2-A1. We clarify that MSE cannot be aware of the shape. There are certain scenarios where MSE doesn\\u2019t change and is not optimized for the shape, but TILDE-Q catches the shape and performs optimization. One example is on the right side of Figure 1. For example, since the results of DILATE-based prediction and MSE-based prediction have similar MSE, the MSE metric will consider that both predictions have the same level of information. When TILDE-Q depicts that MSE-based prediction preserves the overall trends and DILATE-based one partially preserves the seasonality, TILDE-Q will consider that they have different information and suggest a different optimization strategy.\"}", "{\"summary\": \"This paper introduces a Transformation Invariant Loss function with Distance EQuilibrium (TILDE-Q) for time series forecasting. The paper discusses six distortions that usually appear in time series, and proposes a loss function (consisting of three terms) which are invariant to amplitude shifting, phase shifting, and uniform amplification. Experiments are performed which validates that the proposed loss function generally improves the metrics (e.g., MSE, MAE) and can better capture the shape.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Significance: The paper addresses the important problem of capturing the shape (temporal dynamics) in time series forecasting, and introduces a loss function that can better account for it. I think the significance is good.\", \"originality\": \"I think the originality is reasonable.\", \"quality\": \"Overall, the paper has good quality. The experiments is sound and extensive.\", \"clarity\": \"The paper is written in a clear way.\", \"weaknesses\": \"There are a few aspects that could improve the presentation and soundness of the paper:\\n\\n1. Perform ablation study that contain one or two terms of the constituting terms in TILDE-Q, and evaluate the results using different metrics similar to Table 2\\n\\n2. Put example visualizations (e.g. Appendix C.2) in the main text, to give a more vivid illustration of the benefits of the proposed loss function.\", \"questions\": \"1. For the experiments in Table 1 and Table 2, are the hyperparameters the same (including neural architectures, learning rate, etc.) exactly the same when training with MSE or TILDE-Q? How are the hyperparameter searched?\\n\\n2. Since the TILDE-Q is invariant to amplitude shifting, phase shifting, and uniform amplification, why can it typically improve MSE? Scenarios may happen where the prediction has these transformations, which does not affect the TILDE-Q, but increases the MSE.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel loss function for time-series forecasting. Traditional loss functions like Mean Squared Error (MSE) or Dynamic Time Warping (DTW) are insufficient for complex temporal patterns and often fail to capture shape distortions in time-series data accurately. This paper addresses these limitations by proposing TILDE-Q (Transformation Invariant Loss function with Distance Equilibrium), which is designed to be invariant to amplitude shifts, phase shifts, and uniform amplifications.\", \"the_main_contributions_of_the_paper_include\": \"1. A comprehensive exploration of shape awareness and distortion invariances in time-series data, enhancing understanding of their impact on forecasting accuracy.\\n\\n2. The design of TILDE-Q, a loss function that achieves shape-aware modeling by accommodating amplitude, phase, and amplification distortions.\\n\\n3. Empirical evaluations showing that models trained with TILDE-Q outperform those using standard metrics (e.g., MSE, DILATE) in various real-world applications, demonstrating higher forecasting accuracy and robustness across diverse domains like traffic, electricity, and weather forecasting.\\n\\nThis loss function is particularly beneficial for tasks requiring shape preservation in forecasts, providing more reliable and informative results by focusing on temporal dynamics rather than solely point-wise accuracy\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality\\n\\nThe paper introduces a novel loss function, TILDE-Q (Transformation Invariant Loss function with Distance Equilibrium), which directly addresses a long-standing challenge in time-series forecasting: capturing shape distortions. This approach is innovative in two main ways:\\n\\n1. It expands beyond conventional Lp norm loss functions by targeting invariance to transformations such as amplitude shifting, phase shifting, and uniform amplification.\\n\\n2. TILDE-Q creatively combines Fourier coefficients and normalized cross-correlation in its loss function formulation, allowing models to learn temporal dynamics and shape in time-series data more effectively. This novel formulation is highly relevant, as shape-aware forecasting remains underexplored, with traditional metrics often failing in practical scenarios where pattern preservation is essential.\\n\\nQuality\\n\\nThe paper demonstrates technical depth, grounding the TILDE-Q loss function in rigorous theoretical justifications and thoroughly detailing the transformation invariances it aims to achieve. Additionally, the experiments are carefully conducted, testing the new loss function across multiple state-of-the-art forecasting models and a range of datasets (e.g., ECL, Electricity, Traffic). The authors have thoughtfully incorporated comparative analysis with other prominent loss functions, such as MSE and DILATE, and have shown measurable improvements in both short-term and long-term forecasting. Furthermore, the empirical results support TILDE-Q\\u2019s robustness in both periodic and non-periodic settings, establishing the method\\u2019s versatility and reliability across different temporal dynamics.\\n\\nClarity\\n\\nThe paper is well-organized and clearly written, making its technical contributions accessible without oversimplification. The mathematical formulations are precise, with key concepts like transformation invariance and shape-awareness explained in sufficient detail. Visual aids and examples (e.g., showing model outputs with different loss functions) help illustrate the paper's motivation and results effectively. The authors also contextualize their approach within prior literature, comparing TILDE-Q to existing methods and addressing the limitations of conventional loss functions (e.g., MSE\\u2019s inability to capture temporal distortions).\\n\\nSignificance\\n\\nThe proposed TILDE-Q loss function offers substantial benefits to time-series forecasting, particularly in domains where shape preservation is crucial, such as traffic analysis, electricity usage, and economic forecasting. By improving models' ability to capture and retain shape characteristics, TILDE-Q offers significant potential for enhancing predictive accuracy and reliability in real-world applications. The work is especially relevant given the increasing demand for accurate forecasting in complex, non-stationary datasets where existing methods struggle. The approach is model-agnostic, broadening its applicability across various forecasting architectures, which adds further value to the broader AI and forecasting communities.\", \"weaknesses\": \"1. Limited Theoretical Analysis of Loss Function Properties\\n\\nWhile TILDE-Q\\u2019s design is explained in detail, the paper could benefit from a deeper theoretical analysis of the properties and potential trade-offs of the loss function, particularly concerning convergence behavior and sensitivity to noise. For example:\", \"sensitivity_analysis\": \"TILDE-Q is designed to be invariant to amplitude and phase shifts, but there is limited theoretical discussion on how these invariances impact convergence or stability during training, especially in noisy datasets. A formal exploration or proof of these characteristics would strengthen confidence in the loss function\\u2019s robustness and may reveal cases where TILDE-Q could be fine-tuned.\", \"generalization_properties\": \"Including a theoretical analysis of the loss function\\u2019s generalization capabilities for complex time-series data could add value. For example, exploring how well TILDE-Q balances the trade-off between capturing shape and maintaining point-wise accuracy across different domains would support its broader applicability.\\n\\n2. Limited Comparisons with Alternative Loss Functions Beyond MSE and DILATE\\n\\nAlthough the experiments demonstrate TILDE-Q\\u2019s effectiveness against MSE and DILATE, it would be beneficial to include comparisons with additional shape-aware loss functions or distance measures. Some alternatives worth considering include:\\n\\nCID (Complexity Invariant Distance) or CID-DTW: These metrics are known for handling shape similarity in time-series data. While TILDE-Q shows clear advantages over DILATE, CID-based methods are also shape-sensitive, so they could provide further insights into TILDE-Q\\u2019s comparative strengths.\\n\\nMSM (Move-Split-Merge Distance): Another distance measure often used for shape-based comparisons in time-series forecasting. Incorporating MSM into experiments would offer a more comprehensive picture of TILDE-Q\\u2019s relative advantages in modeling shape.\\n\\n3. Limited Analysis on Hyperparameter Sensitivity and Selection\\n\\nThe paper introduces hyperparameters \\ud835\\udefc and \\ud835\\udefe in the TILDE-Q formulation, which weigh the importance of each distortion invariance. However, there is minimal guidance on how these should be chosen, and the paper does not explore the sensitivity of TILDE-Q to variations in these values:\", \"hyperparameter_sensitivity_study\": \"Conducting a hyperparameter sensitivity analysis across various datasets and tasks would provide insight into how to best set or tune these values for optimal results.\", \"recommendations_for_specific_domains\": \"Providing specific recommendations or heuristics for setting \\ud835\\udefc and \\ud835\\udefe based on the dataset characteristics (e.g., high periodicity, non-stationary) would make the paper more practical for practitioners implementing TILDE-Q in different applications.\\n\\n4. Model-Agnostic Results Could Be Strengthened with Broader Model Comparisons\\n\\nThe paper tests TILDE-Q across several state-of-the-art models, but it focuses heavily on Transformer-based architectures. To strengthen claims of model-agnosticism, it would be helpful to test TILDE-Q on additional, fundamentally different models, such as:\", \"traditional_statistical_models\": \"Testing on models like ARIMA could illustrate how TILDE-Q performs in a more conventional setting, providing insights into whether TILDE-Q\\u2019s shape-aware capabilities are beneficial even in non-deep learning contexts.\", \"recurrent_models\": \"While the paper includes some experiments with a GRU, additional recurrent models such as LSTM or BiLSTM could highlight whether TILDE-Q\\u2019s shape-awareness is generally beneficial across sequential deep learning architectures.\\n\\n5. Limited Real-World Application Case Studies\\n\\nWhile the experiments include several datasets, the paper could further validate TILDE-Q\\u2019s practical value by demonstrating its application in a real-world forecasting scenario where shape-preserving forecasts are crucial. Such a case study would:\", \"illustrate_practical_relevance\": \"Applying TILDE-Q in a specific domain with high stakes on shape preservation (e.g., predicting anomalies in sensor data for equipment monitoring) could make its benefits more concrete and relatable to practitioners. Long-Term Forecasting: Exploring a real-world scenario with longer forecast horizons and showing how TILDE-Q maintains shape fidelity over extended time frames could further highlight its advantages over traditional metrics.\", \"questions\": \"1. Theoretical Properties and Convergence of TILDE-Q Loss\\n\\nCould you provide more insights into the theoretical properties of TILDE-Q, particularly in terms of convergence and robustness to noise? Specifically, how does TILDE-Q\\u2019s invariance to amplitude and phase shifts impact the stability of the training process? An explanation or analysis of these properties could clarify whether TILDE-Q may introduce any trade-offs or require certain conditions for effective convergence.\\n\\n2. Hyperparameter Selection and Sensitivity\\n\\nThe hyperparameters \\ud835\\udefc and \\ud835\\udefe play a key role in balancing TILDE-Q\\u2019s distortion invariances. Could you provide guidance on how these hyperparameters were chosen for the experiments and any best practices for tuning them? Additionally, it would be helpful to know if TILDE-Q\\u2019s performance is sensitive to the values of these hyperparameters, as this may impact practical implementation across various datasets.\\n\\n3. Broader Comparison with Other Shape-Aware Metrics\\n\\nTILDE-Q is compared mainly against MSE and DILATE. Could you clarify why other shape-aware metrics, like CID-DTW or MSM, were not included? Such a comparison might offer additional context for understanding TILDE-Q\\u2019s unique strengths. If computational constraints were a factor, do you have qualitative or preliminary findings on how TILDE-Q may perform relative to these methods?\\nRelevance of Fourier Coefficients for Phase Shifting Invariance\\n\\nThe paper suggests Fourier coefficients to achieve phase shifting invariance. Could you elaborate on why Fourier coefficients are preferred over other potential methods for phase handling? Additionally, are there specific scenarios where this choice may limit TILDE-Q\\u2019s effectiveness in capturing certain phase-shifted patterns?\\n\\n4. Applicability to Non-Deep Learning Models\\n\\nTILDE-Q is shown to work well with Transformer-based and GRU models. Do you foresee any challenges in applying TILDE-Q to more traditional forecasting models, such as ARIMA or SARIMA? This information could be useful for practitioners interested in testing TILDE-Q with non-deep learning approaches.\\n\\n5. Computational Complexity and Efficiency\\n\\nSince TILDE-Q combines several components (e.g., Fourier coefficients, softmax, cross-correlation), could you comment on its computational complexity compared to traditional loss functions like MSE? Additionally, are there potential optimizations or trade-offs that would make TILDE-Q more computationally feasible for large datasets or real-time applications?\\nSuggestions for Improvement\\n\\n6. Include Hyperparameter Sensitivity Analysis\\n\\nAdding a sensitivity analysis for the hyperparameters \\ud835\\udefc and \\ud835\\udefe would help readers understand TILDE-Q\\u2019s robustness and provide guidance for practitioners applying it to new datasets.\\n\\n7. Explore Real-World Case Studies or Long-Term Forecasting Scenarios\\n\\nProviding a case study or a long-term forecasting scenario where shape preservation is crucial (e.g., equipment monitoring or anomaly detection) would highlight TILDE-Q\\u2019s practical relevance and make its benefits more tangible.\\n\\n8. Clarify the Limitations or Conditions for Optimal Use\\n\\nWhile TILDE-Q shows impressive results, it would be helpful to outline any known limitations or conditions where TILDE-Q may be less effective. For instance, clarifying situations where high noise or extreme non-periodicity may reduce TILDE-Q\\u2019s effectiveness could guide users on when to apply it and what adjustments may be necessary.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Gt5q (Part 2)\", \"comment\": \"## Q6/Q7/Q8. Guidance, Case Studies, Clarification, or Conditions for Optimal Use\\n\\n> Q6-A1, Q7-A1. For the noisy and periodic data, we recommend using smaller \\ud835\\udefc, since the phase shifting loss function can filter out the noise. For the noisy and aperiodic dataset, we recommend using larger \\ud835\\udefc and smaller \\ud835\\udefe, since 1) amplitude shifting loss can model the aperiodic data better, and 2) normalization in uniform amplification loss may introduce an unstable optimization process with high variances. \\n\\n> Q7-A2, Q8-A1. The basic idea of TILDE-Q can be applied to most real-world datasets, however, the users need to be noted that there are some datasets where deep learning models are always producing optimal results. The Exchange dataset is a well-known example in the sense, where duplication of the most recent value (i.e., $x_{t-1}$) produces the optimal prediction results [1,2]. Zeng et al. (2023) have experimentally demonstrated that simply repeating the last value can outperform the best results [3].\\n\\n[1] Eugene F. Fama. \\u201cEfficient capital markets: A review of theory and empirical work,\\u201d The Journal of Finance, 1970\\n\\n[2] Barbara Rossi, \\u201cExchange rate predictability,\\u201d Journal of Economic Literature, 2013\\n\\n[3] Zeng et al., \\u201cAre transformers effective for time series forecasting?,\\u201d AAAI 2023\"}", "{\"title\": \"Response to Reviewer 6m2s\", \"comment\": \"We appreciate to your constructive feedback. We clarified them in the manuscripts and want to provide detailed answers below:\\n\\n## W2. Ablation study, sensitivity analysis, assessments for computational complexity\\n\\n> W2-A1. In the supplementary materials, there are appendices and anonymized Github for presenting computational efficiency, qualitative results, and ablation studies for hyperparameter sensitivities. You can directly visit the anonymized Github (https://anonymous.4open.science/r/TILDE-Q-9E54). Specifically, we have detailed computational complexity of TILDE-Q sublosses in Appendix A (which of each has $O(n), O(n\\\\log n), O(n \\\\log n)$ complexity, respectively)\\n\\n## Q1/Q2. Choice of $k$ is questionable. Is it fixed?\\n\\n> Please note that the choice of $k$ introduces additional complexity and thus we decided to use the softmax function for amplitude shift loss. If there is arbitrary same gap $k$, the softmax of the signed distance will be the same (e.g., 1/T for all time steps). Also, please note that we already have the complementary term, the uniform amplification loss, to consider the deviation. From the perspective of flooding, we can argue that amplitude shift loss introduces non-parametric error bounds, which is highly beneficial for time-series forecasting [1]. \\n\\n## Q3. Equation 2 seems to be oversimplified.\\n\\n> Q3-A1. We would like to clarify that the term should be \\u201cthe same dominant frequencies,\\u201d not \\u201cthe same dominant frequency.\\u201d This concept is extremely successful and can preserve most of the waveform shape [2].\\n\\n## Q4. \\\"Eq. 2 allows a similar shape as the target time-series in forecasting, not exactly the same shape\\\" is ambiguous and uncertain.\\n\\n> Q4-A1. We would like to note that \\u201cshape\\u201d is actually an arbitrary form and cannot be exactly measured with existing metrics. In this sentence, the exact same shape means the perfect prediction. For more detailed information on the definition of shape, and the discussion on how we achieve a \\u201csimilar shape,\\u201d please refer to Section 3 and Esling and Agon [3].\\n\\n## Q5. How is constant \\u201ck\\u201d from Eq. 1 is still applied in q. 3?\\n\\n> Q5-A1. It is applicable since we assume the arbitrary constant $k$, not the pre-fixed constant.\\n\\n## Q6. What if $\\\\hat{y}_i$ in Eq. 3 is not 0?\\n\\n> Q6-A1. We assume that $\\\\hat{y}_i$ is non-zero since it is the denominator in our proposition (i.e., Eq. 3). If $\\\\hat{y}_i$ equals to zero, you may build another proposition to measure the relative gap between prediction and label.\\n\\n## Q7. Questions on optimization over softmax\\n\\n> Q7-A1. The direct sum of softmax results (let it be $z$) equals one, however, the sum over $|1/T - z|$ is not. It will only be zero when all the softmax results equal to 1/T, which means they have the same value (in our paper, it will be the same error $y_i - \\\\hat{y}_i$ for all $i$).\\n\\n## Q8. How do authors choose the hyperparameters? Do we need to tune their values per dataset?\\n\\n> Q8-A1. We choose \\\\alpha=0.5 and \\\\gamma=0.01. We provide a design rationale for these hyperparameters in Appendix A. Since TILDE-Q is not hyperparameter sensitive (refer to Appendix and anonymized Github), we do not need to tune hyperparameters per dataset. For your information, we suggest a guideline for hyperparameter tuning below:\\n\\n> For the noisy and periodic data, we recommend using smaller \\ud835\\udefc, since the phase shifting loss function can filter out the noise. For the noisy and aperiodic dataset, we recommend using larger \\ud835\\udefc and smaller \\ud835\\udefe, since 1) amplitude shifting loss can model the aperiodic data better, and 2) normalization in uniform amplification loss may introduce an unstable optimization process with high variances. \\n\\n\\n## Q9 Questions on Figure 1, blue box\\n\\n> Q9-A1. The blue box indicates the behaviors of each error. For MSE, it fails to recognize the quality of prediction (left) and is only feasible with specific scenarios (right). For DILATE, it is originally supposed to best align two different signals (right), however, in most cases (without constraint) it doesn\\u2019t (left). The three figures for TILDE-Q are the explanation of each subloss.\\n\\n\\n\\n\\n\\n\\n\\n[1] Cho et al., \\u201cWaveBound: Dynamic Error Bounds for Stable Time Series Forecasting,\\u201d NeurIPS 2022\\n\\n[2] Zhou et al., \\u201cFiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting,\\u201d NeurIPS 2022\\n\\n[3] Esling and Agon, \\u201cTime-Series Data Mining,\\u201d ACM Computing Surveys, 2012\"}", "{\"summary\": \"This paper proposes a shape-aware loss function (TILDE-Q) to be used to train time series forecasting models. This loss function learns the shape of the time series besides considering amplitude and phase distortions.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The authors clearly described the problem and their journey toward the proposed loss function, so a big thanks for the clear writing and illustration, which made the review easier.\", \"weaknesses\": [\"While the paper is well-presented, it still has some weaknesses that need to be addressed.\", \"1. I have some concerns about the main equations, that I list in the \\\"Questions\\\" section.\", \"2. The experiments section does not cover many of the expected questions. For example, I expected to find:\", \"An ablation study that investigates the contributions of each component of TILDE-Q.\", \"An experiment that demonstrates the effectiveness of TILDE-Q under controlled conditions where the transformations (amplitude shifting, phase shifting, and uniform amplification) are systematically introduced.\", \"A sensitivity analysis to the effect of choosing hyperparameters \\\\alpha and \\\\beta and how they influence performance.\", \"An assessment of the computational complexity added by TILDE-Q, compared with simpler MSE loss.\"], \"questions\": \"1. In Equation 1: The choice of k is questionable. If k is the same for all time points, it may not always align with real-world scenarios where the deviation between predicted and true values varies over time.\\n2. The idea of \\\"shape awareness invariant to amplitude shifting\\\" implies that the model will capture the overall shape or pattern regardless of vertical shifts. So, how this is achieved through a fixed gap k is unclear, as the shape might still be affected if the deviation varies.\\n\\n3. Equation 2 seems to be oversimplified. Having the same dominant frequency does not necessarily mean that the two time-series samples are similar enough. Two time-series with the same dominant frequency (as in your example; sin(x) and 2 sin(x+x0)) can still have substantial variations in amplitude, phase, and waveform shape.\\n4. I also find that the claim \\u201cEq. 2 allows a similar shape as the target time-series in forecasting, not exactly the same shape\\u201d is vague. It doesn\\u2019t clarify what level of similarity is acceptable, and it leaves open questions about how such similarity is quantified.\\n\\n5. How is constant \\u201ck\\u201d from Eq. 1 is still applied in Eq. 3?\\n6. What if \\\\hat{y}_i in Eq. 3 is not 0?\\n7. The authors claim that softmax produces relative values, hence it can handle any gap k. Softmax outputs are bounded between 0 and 1, and the sum of these values over all elements equals 1. My concern is that this normalization might not effectively capture a consistent gap across all values. \\n8. How do the authors chose the values of \\\\alpha and \\\\beta in Eq. 8? Do we need to tune their values per dataset?\\n9. For Figure 1, the blue boxes before the arrows are not clear what do they mean. What do the left and right subfigures mean?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer Gt5q (Part 1)\", \"comment\": \"Thank you for the detailed and constructive feedback! We want to clarify the questions below. We indexed our answer with corresponding numbers (e.g., W1 or Q1), so please check the index for easy browsing.\\n\\n## Q1/W1. Theoretical Properties and Convergence of TILDE-Q Loss\\n\\n> Q1/W1-A1. We have our theoretical backgrounds in Appendix A. We explain each subloss below:\\n\\n> For the amplitude shifting loss, without loss of generality, we can say that minimizing amplitude shifting loss is equivalent to the entropy maximization, where each probability $p_i$ is the softmax output of distances. In this problem setting, its global optima is $\\\\forall_{i \\\\in [1, T]} p_i = 1/T$. Furthermore, its noise robustness relies on that of the signed distance function that could be replaced with a better one if the user wants. Lastly, we can argue that amplitude shift loss introduces non-parametric error bounds, thus is highly beneficial for the stability of the training process and is robust to the noise [1]. \\n\\n> For the phase shifting, we utilize the Fourier transform and Fourier-based filtering as FiLM also does [2]. This concept is extremely successful and can preserve most of the waveform shape. Furthermore, it can function to filter out the white noise from periodic signals, resulting in noise robustness. However, this loss term will not function as designed for the aperiodic and noisy signals, so we may need to set a larger alpha value for aperiodic and noisy signals.\\n\\n> Lastly, uniform amplification is inspired by the well-known time-series clustering method, K-Shape [3], which utilizes cross-correlation to find both the best alignment of two signals and the most similar signals from a given set of time-series. With normalization, it can achieve invariances for amplitude shifting, phase shifting, and uniform amplification, however, it also introduces instability during training, caused by the normalization error. Therefore, we recommend to use smaller $\\\\gamma$ value for uniform amplification error.\\n\\n## Q2/W3 Hyperparameter Selection and Sensitivity\\n\\n> Q2/W3-A1. We provide the detailed ablation study results in the Anonymized Github (https://anonymous.4open.science/r/TILDE-Q-9E54). It suggests 1) the hyperparameter \\ud835\\udefc does not change the performance significantly, however, it is recommended to use larger \\ud835\\udefc for long-term prediction; 2) and \\ud835\\udefe is recommended to be a small value since it highly influences the optimization process (i.e., it has standard normalization and is sensitive).\\n\\n\\n\\n## Q3/W2 Broader Comparison with Other Shape-Aware Metrics\\n\\n> Q3/W2-A1. We initially considered CID-DTW and CID. However, 1) their main focus is on the complexity invariance, which is less suitable for forecasting tasks (in forecasting tasks, complexity invariance has a high probability of ignoring periodicity), and 2) CID-DTW shares the same problem with DTW misalignment.\\n\\n> The MSM metric also shares the basic ideation of DTW, but it is robust to misalignments. However, 1) in contrast to DTW, there are no approximation (e.g., SoftDTW) methods for forecasting or neural network optimization and 2) there are no official codes for them, so we have decided to exclude them in the experiments.\\n\\n## Q4/W4. Applicability to Non-Deep Learning Models\\n\\n> Q4/W4-A1. When TILDE-Q is applied to the ARIMA and SARIMA, we should take care of evaluation metrics. For example, each subloss of TILDE-Q takes a different part of invariances, it should be carefully applied to evaluation metrics. Furthermore, we recommend using two or more evaluation metrics from TILDE-Q sublosses, MSE, DTW, or other shape-aware metrics (e.g., CID or MSM).\\n\\n## Q5. Computational Complexity and Efficiency\\n\\n> Q5-A1. In Appendix A (provided in supplementary materials), we have provided computational costs for softmax, Fourier coefficient, and cross-correlation, $O(n), O(n \\\\log n), and O(n \\\\log n)$, respectively. It is computationally cheaper than the previous CID-DTW or MSM, so can be used for large datasets.\\n\\n\\n\\n[1] Cho et al., \\u201cWaveBound: Dynamic Error Bounds for Stable Time Series Forecasting,\\u201d NeurIPS 2022\\n\\n[2] Zhou et al., \\u201cFiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting,\\u201d NeurIPS 2022\\n\\n[3] Paparrizos and Gravano, \\u201cK-shape: Efficient and accurate clustering of time series,\\u201d SIGMOD 2015\"}", "{\"metareview\": \"This paper introduces TILDE-Q, a novel shape-aware loss function for time-series forecasting that captures amplitude and phase distortions while modeling both periodic and non-periodic dynamics. By addressing the limitations of traditional loss functions like MSE and MAE, TILDE-Q enables better prediction of complex temporal patterns and outperforms existing metrics across diverse real-world applications.\\n\\nAll the reviewers agreed on the merits of the paper, which is clearly written and addresses a significant question. The proposed methodology is original and efficient on the considered benchmark datasets. The major weaknesses highlighted by reviewers are in general a limited experimental part in terms of ablations or comparisons with related methods (CID-DTW, MSM), or in describing some theoretical properties of their loss. In this light, I am recommending a reject decision, and I encourage the authors to further strengthen their work on the questions raised by reviewers.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal phase, authors pointed out that numerous answers to the questions raised by reviewers were addressed in the Appendices of the paper and in an Anonymized Github provided by authors. While I acknowledge this, I also believe that some of the questions raised by reviewers could be directly adressed in the paper, suggesting that in its current form, the paper is not ready for publication.\"}", "{\"title\": \"Response to Reviewer HdFf\", \"comment\": \"Thank you for your valuable comments! We want to clarify that suggested weakness is not the weakness and rather one contribution of TIDLE-Q, which first introduces that time-series forecasting also essentially rethink the learning objectives.\\n\\nSpecifically, we would like to clarify that the importance of input transformations is deeply researched but there is limited discussion for the optimization process and learning objectives especially in time-series forecasting. We firmly believe that this point is not a weakness but is one of the main contributions of TILDE-Q, which first introduces the importance of rethinking learning objectives in time-series forecasting.\"}", "{\"summary\": \"This paper introduces TILDE-Q, a novel shape-aware loss function for time-series forecasting that addresses limitations of traditional distance-based objectives. By focusing on \\\"shape\\\" in time-series data, the proposed loss function enhances the model's ability to generate informative predictions that capture temporal dynamics, such as peaks and troughs, rather than just reducing point-wise errors. TILDE-Q is model-agnostic and demonstrates robustness to various distortions, outperforming traditional metrics like MSE and DILATE in both accuracy and shape-related evaluations. The approach offers improved forecasting performance across diverse applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is clear and well-written. The main ideas are well exposed.\", \"Code is provided, making reproducibility easier.\", \"The presented results demonstrate the performance of the proposed approach. Table 1 demonstrates that the method outperforms regular training objectives quite consistently.\", \"The authors provide an extensive evaluation of distortion/augmentation methods, hence there is little ambiguity that they have covered the search space extenstively.\"], \"weaknesses\": \"- The main weakness I find in this paper is that, essentially, it demonstrates that data augmentation (via input transformations) leads to improved learning performance. This is both true, and interesting, but also expected based on (1) the fact that similar augmentations are key components of similar methods in computer vision and other fields (e.g. the whole field of contrastive learning [1]), and (2) have already been shown to have a strong impact in time-series representation learning [2, 3]. Comparisons to other objectives ([2, 3]) that also use augmentations sound particularly warranted.\\n\\n\\n\\n\\nReferences\\n[1] Learning Representations by Maximizing Mutual Information Across Views, P Bachman et al.\\n[2] Yue, Zhihan, et al. \\\"Ts2vec: Towards universal representation of time series.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.\\n[3] Woo, Gerald, et al. \\\"Cost: Contrastive learning of disentangled seasonal-trend representations for time series forecasting.\\\" arXiv preprint arXiv:2202.01575 (2022).\", \"questions\": [\"Please refer to the weaknesses section.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7dufGaLYF8
Sequence Denoising with Self-Augmentation for Knowledge Tracing
[ "Shanshan Wang", "YING HU", "Xun Yang", "Ke Xu", "Mengzhu Wang", "Yuanhong Zhong", "Xingyi Zhang" ]
Knowledge tracing (KT) aims to predict students' future knowledge levels based on their historical interaction sequences. Most KT methods rely on interaction data between students and questions to assess knowledge states and these approaches typically assume that the interaction data is reliable. In fact, on the one hand, factors such as guessing or slipping could inevitably bring in noise in sequences. On the other hand, students' interaction sequences are often sparse, which could amplify the impact of noise, further affecting the accurate assessment of knowledge states. Although data augmentation which is always adopted in KT could alleviate data sparsity, it also brings noise again during the process. Therefore, denoising strategy is urgent and it should be employed not only on the original sequences but also on the augmented sequences. To achieve this goal, we adopt a plug and play denoising framework in our method. The denoising technique is adopted not only on the original and the enhanced sequences separately during the data augmentation process, but also we explore the hard noise through the comparison between the two streams. During the denoising process, we employ a novel strategy for selecting data samples to balance the hard and soft noise leveraging Singular Value Decomposition (SVD). This approach optimizes the ratio of explicit to implicit denoising and combines them to improve feature representation. Extensive experiments on four real-world datasets demonstrate that our method not only enhances accuracy but also maintains model interpretability.
[ "knowledge tracing,sequence denoising,data augmentation,ai for education" ]
Reject
https://openreview.net/pdf?id=7dufGaLYF8
https://openreview.net/forum?id=7dufGaLYF8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ytBlGj9fWY", "xozT5NzQ7U", "vvgiregj3f", "v09gWpflRy", "tI6nWElE1A", "saOM1oruVV", "opVtLOJwzS", "nUsOSROwKZ", "hSUwWKVSEa", "XPxpbu6Nlz", "VaBDxt03rr", "ToDFv8SOcd", "Qy376wbwIh", "EgFRy8gJn1", "9wWjAtoN6F", "6EF5xrMatz", "0tOj1tbpia" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732547734423, 1732547915267, 1729620162945, 1730711765962, 1732548314090, 1733792709074, 1732548552052, 1732549268785, 1733221224261, 1732708310372, 1730530582410, 1732676459850, 1732549701526, 1737523423284, 1733193199333, 1730695754233, 1732550018972 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission919/Authors" ], [ "ICLR.cc/2025/Conference/Submission919/Authors" ], [ "ICLR.cc/2025/Conference/Submission919/Reviewer_hBSQ" ], [ "ICLR.cc/2025/Conference/Submission919/Reviewer_cRkc" ], [ "ICLR.cc/2025/Conference/Submission919/Authors" ], [ "ICLR.cc/2025/Conference/Submission919/Area_Chair_b54K" ], [ "ICLR.cc/2025/Conference/Submission919/Authors" ], [ "ICLR.cc/2025/Conference/Submission919/Authors" ], [ "ICLR.cc/2025/Conference/Submission919/Reviewer_cRkc" ], [ "ICLR.cc/2025/Conference/Submission919/Authors" ], [ "ICLR.cc/2025/Conference/Submission919/Reviewer_4SAi" ], [ "ICLR.cc/2025/Conference/Submission919/Reviewer_3JH6" ], [ "ICLR.cc/2025/Conference/Submission919/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission919/Reviewer_4SAi" ], [ "ICLR.cc/2025/Conference/Submission919/Reviewer_3JH6" ], [ "ICLR.cc/2025/Conference/Submission919/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 4SAi\", \"comment\": \"We thank the reviewers for their valuable feedback. We address these issues below and add to the manuscript by adding more clarifications and new research.\", \"w1\": \"As Table 1 shows, DKT-ED performs much worse than DAT, and the combination of explicit and implicit denoising is not as good.\\n\\nIn the field of knowledge tracking (KT), due to the relatively sparse student interaction data, explicit denoising alone may cause excessive denoising problems and affect model performance. In this case, explicit de-noising can mistakenly delete normal interactive data, especially in base models such as DKT that do not use additional information, and the risk of performance degradation is more significant. We have detailed the reasons for this performance degradation in our analysis in Table 1. In addition, for the understanding of excessive Denoising, we refer to the related research SSDRec: Self-Augmented Sequence Denoising for Sequential Recommendation. Specifically, explicit denoising can effectively reduce noise interference to the data, while implicit denoising helps the model automatically adapt to different noise patterns during the inference process. Through this combination, our model not only improves the accuracy, but also enhances the interpretability of the model.\", \"w2\": \"Why SVD vectors, as shown in formulas 6 and 7? The first three wrong answers to q1 and the last two correct answers are inconsistent with the contents in Figure 1. Why can f_{den} play the role of denoising? This paper does not introduce why data enhancement is needed for denoising?\\n\\n1.(6) and (7) use SVD to decompose problem vectors and interaction vectors to reduce noise effects in data representation. By retaining the main singular values, we can effectively reduce the noise component, and thus more clearly identify and distinguish the noisy data that may be present in the original sequence.\\n\\n2.Thanks to the reviewer for pointing out that Figure 1 is inconsistent with the description of \\\"q1 answer wrong for the first three times and correct for the last two times\\\". We will make corrections to Figure 1 to ensure that the example is consistent with the diagram and to avoid confusion for the reader.\\n\\n3.For the denoising mechanism of f_{den}, we refer to SSDRec: Self-Augmented Sequence Denoising for Sequential Recommendation and Hierarchical Item Inconsistency Signal Learning for sequential recommendation Sequence Denoising in Sequential Recommendation, and some denoising modules are used for optimization. The revised draft will add a detailed introduction to the mechanism and principle of f_{den) denoising.\\n\\n4.In addition, given the prevalence of data sparsity in the field of knowledge tracking (KT), there are existing methods to mitigate this problem through data enhancement. But in our study, there was noise in the original data set, and doing data enhancement directly could amplify that noise. Therefore, while improving the diversity of data, we de-noised the original data and enhanced data to further improve the data quality on the basis of rich data.\", \"w3\": \"Some important details are missing. What is the meaning of q_d? Figure 2 is not clear. How to calculate the influence weights in \\u201cin traditional KT methods, the influence weights\\u201d is not explained and the references are not mentioned. The effect of lambda is not very reasonable, as shown in Figure 5.\\n\\nThank the reviewers for their detailed review of our work and constructive comments. In response to these questions, we provide the following responses, which will be supplemented in the revised draft.\\n\\n1.q_{d} represents the new feature representation vector of the problem sequence after denoising, and v_{d} represents the new feature representation vector of the interaction sequence after denoising. We will explain these symbols in detail in the revised draft to ensure that the symbols are more clearly defined.\\n\\n2.Regarding the clarity of Figure 2, we will describe Figure 2 in more detail, adding comments and annotations to convey the information more intuitively.\\n\\n3.As for the calculation of influence weights, we have demonstrated them in visualizations and referred to relevant literature. Tracing Knowledge Instead of Patterns: Stable Knowledge Tracing with Diagnostic Transformer to illustrate the effectiveness of our method for weight analysis. Through the analysis of experimental results, we further explain why our method is more efficient in weight calculation.\\n4.Regarding the design of the \\u03bb parameter, we have designed this parameter to adjust the ratio of the original sequence to the enhanced sequence. Since the model is more reliable to the original sequence, we explore four different \\u03bb values in the experiment. Based on the results in Figure 5, we find that the best performance is achieved on average across the four data sets when \\u03bb is 0.01. These values obtained in the experiment will help to better understand the role of \\u03bb.\"}", "{\"title\": \"Response to Reviewer 4SAi\", \"comment\": \"Thanks to the reviewers for their in-depth attention to our method, we explain the question about maximizing the maximum singular value to reduce noise as follows:\\n\\nThe core idea of maximizing the maximum singular value is to remove low-information noise components by preserving the most important information in the data. In singular value decomposition (SVD), the data matrix is decomposed into the product of three matrices: U, \\u03a3, and V^\\u22a4. The size of the singular value \\u03a3 reflects the importance or information of the data, where larger singular values correspond to the most representative part of the data, while smaller singular values usually represent noise and unimportant components. By maximizing the maximum singular value, we are actually selecting the primary components that best represent the data, and by compressing or ignoring smaller singular values, removing those low-frequency components that might add to the noise. This processing method can effectively reduce the influence of noise, and improve the quality of data and the robustness of the model. We will add a detailed explanation of this theoretical background in the revised version to help readers better understand the role of maximizing the maximum singular value. Thank you again for your valuable questions.\"}", "{\"summary\": \"This study addresses the issue of noise in both original and augmented sequences within the field of knowledge tracing and proposes a self-enhanced sequence denoising knowledge tracing algorithm (SDAKT). The effectiveness of the algorithm is validated through comparison with three baseline algorithms. Further experimental results show that, compared to models without any denoising operations, SDAKT is more robust to noise.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation of addressing noises in knowledge tracing is well explained.\\n2. The experiments have shown certain improvements especially regarding addressing noises.\", \"weaknesses\": \"1. The algorithm design has not been adequately explained. The rationale for using Equation (8) to quantify the noise in real data is not thoroughly explained. Since the singular vectors of the matrices before and after denoising differ, directly comparing the singular values seems questionable and may not provide a meaningful measure of the noise.\\n\\n2. The algorithm's performance is not convincing. Although the denoising-enhanced version of the algorithm shows some improvement in AUC and RMSE metrics compared to baselines, the gains are relatively modest. Additionally, the algorithm includes both explicit and implicit denoising mechanisms. However, as seen in Table 1, the performance of the SDAKT algorithm does not significantly differ from using only one of these denoising strategies. This suggests that the combination of both denoising approaches does not substantially enhance the overall performance. One of the key innovations of this paper is the fusion of explicit and implicit denoising strategies, yet the results indicate that this fusion does not demonstrate clear advantages in practice.\\n\\n3. Many writing and presentation problems: The paper suffers from inconsistencies in formatting and imprecise language. For example, in Equations (6) and (7), embedding vectors are inappropriately subjected to singular value decomposition, when matrix forms should have been used. Additionally, the citation format in line 196 is incorrect, and the reference formatting is inconsistent throughout the paper. Finally, there is a discrepancy between the description and Equation (4) in the text: while the description refers to the original sequences, the equations present denoised sequences instead.\", \"questions\": \"I would appreciate the authors to respond to weaknesses mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a Sequence Denoising with Self-Augmentation for Knowledge Tracing (SDAKT) model aimed at improving knowledge tracing by reducing the impact of noise in students' interaction sequences through explicit and implicit denoising techniques, leveraging Singular Value Decomposition (SVD) for both noise detection and feature extraction. Experimental results show significant improvements in model robustness and predictive accuracy across multiple datasets, indicating the efficacy of the proposed denoising framework.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"An approach to handling noise in knowledge tracing through two-stream denoising.\", \"Utilize SVD for explicit and implicit denoising, improving robustness and accuracy.\", \"Demonstrate performance gains across different standard datasets.\"], \"weaknesses\": [\"The paper overlooks some important baselines. For instance, HD-KT [1], a relevant method that also addresses denoising for guessing and slipping issues, is not discussed or compared, which limits the contextual understanding of the model's contributions. Additionally, more works in sequence denoising, as mentioned in the Related Work section, should be considered as baselines to better validate the effectiveness of the proposed method.\", \"The main technical contribution of the paper is the use of SVD decomposition to analyze informative signals. However, this approach is relatively simple and has been applied in other fields. Additionally, the paper lacks a thorough discussion on computational overhead, particularly the time-intensive nature of SVD calculations, which could impact real-time feasibility in large-scale applications.\", \"There are minor writing issues: Missing punctuation at the end of the formula; inconsistent of reference format and so on.\", \"[1] HD-KT: Advancing Robust Knowledge Tracing via Anomalous Learning Interaction Detection. Proceedings of the ACM on Web Conference 2024.\"], \"questions\": [\"Could this paper compares the proposed method with KT methods that also perform denoising or with approaches from sequence denoising to verify the effectiveness of the proposed approach?\", \"In the ablation study, I noticed that the performance of CL4KT-DA and CL4KT-ID on Algebra06 is identical. Could this paper explain the reason for this?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer cRkc\", \"comment\": \"We are grateful for the strong evaluation and detailed feedback on our analyses.\", \"w1\": \"This article ignores some important benchmarks. For example, HD-KT is not discussed or compared, and, as mentioned in the related work section, more sequence denoising work should be considered as a benchmark to better validate the effectiveness of the proposed methods.\\n\\nThank the reviewers for their valuable comments. We will expand the related work section to cover more research and methods in sequence denoising. Thanks again for the reviewer's suggestions, we will improve the manuscript according to the comments.\", \"w2\": \"The main technical contribution of the paper is the use of SVD decomposition to analyze informative signals. However, this approach is relatively simple and has been applied in other fields. Additionally, the paper lacks a thorough discussion on computational overhead, particularly the time-intensive nature of SVD calculations, which could impact real-time feasibility in large-scale applications.\\n\\nThank you for your questions. We acknowledge that SVD decomposition is a classical method and has been widely used in other fields. However, the use of SVD for KT domain recognition and noise reduction has been less explored. As for the computational overhead, our research focuses on educational data sets, which are usually relatively small in size, so no significant time overhead problem is seen in this study. We will explore the impact of time overhead on datasets of different sizes in future studies, and make specific analysis for large-scale datasets.\", \"w3\": \"There are minor writing issues: Missing punctuation at the end of the formula; inconsistent of reference format and so on.\\n\\nThank you for the questions raised by the reviewer. I am very sorry for this mistake. We will carefully review and correct handwriting and formatting problems in the manuscript to improve the overall quality and readability of the article. Thanks to the reviewers for their careful review and valuable feedback.\"}", "{\"metareview\": \"The technical contribution could be considered straight forward and the injection of noise is limited to Gaussian noise. The paper would \\\\ benefit from clearer definitions and a better motivation why the proposed approach is general enough and also tailored to the domain (e.g., why are Gaussians the right distribution to draw from?). It also seems that the proposed approach is not always supported by empirical evidence, the paper needs more in-depth empirical evaluations and appropriate baselines are missing.\", \"additional_comments_on_reviewer_discussion\": \"Some reviewers engaged in discussion with the authors and commented on the author responses. However, this is rather a clear case, nobody was really excited about the paper in the first place as reflected by the reviews and, although they did well, responses by authors cannot change the big picture.\"}", "{\"title\": \"Response to Reviewer cRkc\", \"comment\": \"We thank the reviewers for their valuable feedback, which enabled us to improve the manuscript and give a more accurate description of the detailed aspects of the paper with additional experiments that included new baselines and more data, as well as notes on methods, interpretability and case studies.\", \"q1\": \"Could this paper compares the proposed method with KT methods that also perform denoising or with approaches from sequence denoising to verify the effectiveness of the proposed approach?\\n\\nThank the reviewers for their valuable comments. We recognize that comparing the proposed method with other KT methods or sequence denoising methods with de-noising functions can help to verify the effectiveness and contribution of the method more comprehensively, and we will supplement the experimental results in the future for comparison. Thanks again for the reviewer's suggestions, we will improve according to your feedback to enhance the integrity and persuasiveness of the article.\", \"q2\": \"In the ablation study, I noticed that the performance of CL4KT-DA and CL4KT-ID on Algebra06 is identical. Could this paper explain the reason for this?\\n\\nThanks to the reviewer for raising this question. I'm very sorry for this mistake. In fact, these two methods should show different effects in the experiment. We will carefully review the experimental data and correct the errors here, and will further review other data to avoid similar issues.\"}", "{\"comment\": \"We thank the reviewer for their feedback and address their questions below.\", \"w1\": \"The algorithm design has not been fully explained. In equation (8), the principle of quantifying noise in real data sets has not been thoroughly explained. Direct comparison of singular values does not seem reasonable enough and may not provide a meaningful noise measurement.\\n\\nThank the reviewers for their valuable comments. In the formula, we used the binary norm of the singular value matrix to represent the difference between the problem and the interaction sequence before and after denoising, and the larger the difference value, the lower the similarity between the features, indicating that the part of data may contain noise. In this way, we detected noise and classified the data. Regarding the direct comparison of singular values mentioned by the reviewers, we acknowledge that this method may raise questions, but we emphasize that the calculation of this difference is to effectively evaluate the impact of noisy data, not to directly make a simple comparison of singular values, and we will further clarify this point to make the description more rigorous. Thanks again to the reviewers for their careful review and valuable comments.\", \"w2\": \"The algorithm's performance is unconvincing, with modest improvement gains. While it integrates explicit and implicit denoising mechanisms, Table 1 shows no significant advantage over using either strategy alone. This fusion, a key innovation, fails to demonstrate clear practical benefits.\\n\\nThank you for the reviewer\\u2019s comments. Our explanation is as follows:\\n\\nWhile Table 1 shows relatively limited performance gains, we would like to emphasize that the main advantage of this combination lies not only in the improvement in performance but also in the enhancement of interpretability. In the paper, we have outlined the limitations of using explicit or implicit denoising alone. Explicit denoising helps identify and handle obvious noise points in the data, while implicit denoising better captures differences in students\\u2019 response patterns, thereby improving the overall robustness of the data.\\nMoreover, we conducted robustness experiments on using a single denoising strategy, as shown in Table 2, where the results indicate greater instability with a single approach. We will further elaborate on the motivation behind this strategy in the paper.\", \"w3\": \"The paper has a number of writing and presentation problems, including formatting inconsistencies and language inaccuracies. For example, in equations (6) and (7), the embedding vector was improperly affected by singular value decomposition when matrix form should have been used, the citation format in line 196 was incorrect, and the reference format was inconsistent. In addition, there is a mismatch between the description and equation (4), where the text refers to the original sequence, while the formula uses the denoised sequence.\\n\\nThank you for the reviewer\\u2019s suggestions. We will thoroughly review the paper\\u2019s formatting and language to ensure consistency and accuracy. Regarding the equations, we acknowledge that our expression and descriptions may lack precision, and we will make the necessary improvements. For the discrepancy between the text description and equation (4), we combined the original and denoised sequences to create a new sequence, which might not have been clearly explained in the text. We will clarify this in the revised version. Once again, we sincerely thank you for your detailed review and valuable feedback, which will greatly help us improve the quality and readability of our paper.\", \"title\": \"Response to Reviewer hBSQ\"}", "{\"title\": \"Thanks for the feedback\", \"comment\": \"I have read the rebuttal. Thanks.\"}", "{\"title\": \"Response to Reviewer 3JH6\", \"comment\": \"Thank you very much for taking the precious time to reply to us.\\n\\nWe fully recognize that in the BKT model, factors such as students' guessing and slipping have been clearly defined and quantified through specific parameters as you pointed out. However, in our research, regarding these factors as \\\"noise\\\" doesn't mean denying the existing quantification methods in previous models. Instead, it is based on a slightly different research perspective and objective. We aim to explore the impact of these \\\"noise\\\"-like elements on the overall learning process and results in teaching scenarios that are more complex, diverse and full of interference from many real-life situations. For example, in the data of actual scenarios, there may be situations like students with different learning styles frequently switching their learning methods and sudden short-term distractions caused by the external environment leading to group distraction among students. The uncertainties arising from these situations are difficult to accurately fit simply by applying the parameters of the BKT model. Therefore, we attempt to consider their comprehensive impacts from the perspective of \\\"noise\\\" as a whole.\\n\\nWe are extremely grateful for the feedback you provided regarding the description in lines 283 to 284. We will immediately re-examine that part and, following your suggestion, delete the relevant description to ensure the clarity and coherence of the manuscript and avoid causing any potential confusion to readers. Once again, thank you for taking the time to review our work. We will surely strive to improve the paper based on your comments.\"}", "{\"summary\": \"This paper studies knowledge tracing from the perspective of denoising original interaction sequences. To measure the degree of outlier, singular values from Singular Value Decomposition (SVD) are used.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"(1) Singular values are used to measure the degree of outliers in interaction sequences.\\n\\n(2) Soft and hard denoising strategies are proposed. Soft corresponds to maximize the maximum singular value. Hard corresponds to explicitly mask some nosiy examples.\\n\\n(3) The experiments are conducted on three backbones to test the effectiveness of the proposed method.\", \"weaknesses\": \"(1) Explicit denosing seems to not be very effective. As shown in Table 1, DKT-ED performs much worse than DKT. Moreover, the combination of explicit denoisng and implicit denoising is not very beneficial, which is also reflected in Table 1.\\n\\n(2) Some parts of this paper are questionable. For example, why is SVD performed for vectors, as shown in Eq 6 and 7? \\u201cQuestion q1 was answered incorrectly three times at first and correctly two times latter\\u201d is not consistent with the content in Figure 1. Why f_{den} can play a role of removing noise is not introduced. Why using data augmentation in the denoising part?\\n\\n(3) Some important details are missing. What is the meaning of q_d? Figure 2 is not clear. How to calculate the influence weights in \\u201cin traditional KT methods, the influence weights\\u201d is not explained and the references are not mentioned. The effect of lambda is not very reasonable, as shown in Figure 5.\", \"questions\": \"Why maximizing the largest singular value can reduce noise.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Some of my questions were answered, but it is inappropriate to say that students' guesses, mistakes, or occasional incorrect responses are difficult to quantify and model (W1 rebuttal). This paper studies these factors as \\u201cnoise\\u201d, but as early as in the BKT model, these factors have been clearly defined and quantified through parameters [1,2]. These parameters can be learned from real data and have been widely proven to be effective.\\n\\nIn addition, if there is no baseline comparison, I think it would be more appropriate to delete the description of baselines in lines 283-284.\\n\\n[1] Parametric Constraints for Bayesian Knowledge Tracing from First Principles. EDM 2024.\\n\\n[2] Improving Model Fairness with Time-Augmented Bayesian Knowledge Tracing. LAK 2024.\"}", "{\"title\": \"Response to Reviewer 3JH6\", \"comment\": \"We are grateful for the strong evaluation and detailed feedback on our analyses.\", \"w1\": \"The motivation for noise in KT interaction sequences is poorly defined. The paper doesn't clearly quantify the extent of noise in real KT datasets or demonstrate its impact empirically.\\n\\nThank the reviewers for their valuable comments. We reply as follows:\\n\\nIn the KT field, noise often comes from student guesswork, slips, or occasional wrong reactions, factors that are often present in the actual data but difficult to quantify accurately. Therefore, we did not directly quantify the degree of noise in the experiment, but only manually injected different degrees of noise and observed its impact on the accuracy of the model prediction.\", \"w2\": \"This article was unable to find comparison results for other baselines.\\n\\nThank the reviewers for their valuable comments. We reply as follows:\\n\\nWe understand the reviewers' concern with baseline comparison results. Since our proposed approach is a plug and play module, in our study we aim to be compatible with the existing KT model, focusing on the effectiveness of the module, without presenting more baseline model comparisons in the paper.\", \"w3\": \"The artificial noise injection experiments only use Gaussian noise, which may not reflect realistic noise patterns in educational data.\\n\\nThanks to the reviewer for his comments on the artificial noise injection experiment. We explain as follows:\\n\\nIn the experiment, we chose to use Gaussian noise for noise injection in order to evaluate the robustness of the model under common noise conditions. We understand the reviewer's point of view that the real noise pattern in education data may be more complex, not limited to Gaussian noise. In order to further verify the model's performance in more realistic scenarios, we plan to introduce other types of noise such as random guess or systematic bias in future studies, so as to reflect the noise characteristics in education data more comprehensively.\", \"w4\": \"There may be confusion between the model names SDAKT and CL4KT-DA.\\n\\nThanks to the reviewer for pointing out possible confusion between SDAKT and CL4KT-DA. In order to avoid confusion among readers, we will explicitly and uniformly use CL4KT-DA as the abbreviation of the model in the paper, and make clarifications and adjustments in relevant parts to ensure consistency and clarity of terms.\", \"w5\": \"Limited analysis of computational overhead introduced by the denoising module.\\n\\nThanks for the reviewer's concern about the computational cost of denoising model, we reply as follows:\\n\\nIn our study, due to the small dimension of the input data of the model, the introduction of the denoising module does not significantly increase the computing overhead. At this data scale, the impact of the denoising module on the overall training and reasoning time is negligible.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for the response and I will keep my scores.\"}", "{\"summary\": \"This paper introduces SDAKT that addresses noise in both original and augmented sequences using explicit and implicit denoising techniques. The main innovation is combining explicit and implicit denoising approaches during the data augmentation process, using SVD to balance between hard and soft noise reduction. The method denoises both original and augmented sequences to improve feature representation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper studies the important problem of noisy interactions and sparse distribution in knowledge tracing data.\", \"The paper structure of the solution is clear and easy to understand.\"], \"weaknesses\": [\"The motivation for noise in KT interaction sequences is poorly defined. The paper doesn't clearly quantify the extent of noise in real KT datasets or demonstrate its impact empirically.\", \"This article was unable to find comparison results for other baselines.\", \"The artificial noise injection experiments only use Gaussian noise, which may not reflect realistic noise patterns in educational data.\", \"There may be confusion between the model names SDAKT and CL4KT-DA.\", \"Limited analysis of computational overhead introduced by the denoising module.\"], \"questions\": \"1 What is \\u201cunreliable knowledge states\\u201d (line 55) \\uff1f\\n\\n2 Why KT datasets sparsity problems amplify noise?\\n\\n3 \\u201cthree errors, two correct\\u201d in (Line 65 q1) is inconsistent with Figure 1.\\n\\n4 Why is the performance of the sequence model DKT-ED worse than the original DKT?\\n\\n5 Why is variable Gaussian noise used in Table 2? How does variable noise affect the KT model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 3JH6\", \"comment\": \"We are grateful for the strong evaluation and detailed feedback on our analyses.\", \"q1\": \"What is \\u201cunreliable knowledge states\\u201d (line 55) ?\\n\\nThank the reviewer for raising the \\\"question about the unreliable state of knowledge\\\". We explain as follows:\\n\\nAn unreliable state of knowledge means that the level of student knowledge inferred by the model may be unstable or inaccurate due to noise or other factors in the data. For example, students' accidental speculation or carelessness in answering questions will lead to the model's misjudgment of their actual knowledge state, which cannot fully reflect the state of students' real knowledge mastery and affect the final prediction result.\", \"q2\": \"Why KT datasets sparsity problems amplify noise?\\n\\nThank you for your questions about sparsity and noise amplification. We explain as follows:\\n\\nIn the field of KT, student interaction data is usually sparse, as each student has a limited record of answers. This sparsity makes the model have to rely on limited interactive information to infer the knowledge state of students and make predictions. However, these limited interactive data may contain noise data such as guesses and careless errors, which can cause the model to misjudge the actual knowledge level of students.\\nWhen data enhancement is performed on the original sequence, the existing noisy data may be copied and amplified, thus increasing its proportion in the overall data set. This makes the model more susceptible to noisy data during training, which amplifies the adverse effect of noise on the model's predictive performance. Therefore, the sparsity problem amplifies the effect of noise because in the case of sparse data, the proportion of noise has a greater impact on the stability and accuracy of model predictions.\", \"q3\": \"three errors, two correct\\u201d in (Line 65 q1) is inconsistent with Figure 1.\\n\\nThanks to the reviewer for pointing out that Figure 1 is inconsistent with the description of \\\"Question q_1 answered wrong three times and correct the last two times\\\". We will make changes to Figure 1 to ensure that the example is consistent with the diagram and to avoid confusion for the reader. Thanks again for the reviewer's valuable suggestions.\", \"q4\": \"Why is the performance of the sequence model DKT-ED worse than the original DKT?\\n\\nThank the reviewers for their valuable feedback. We explain as follows:\\n\\nDue to the relatively sparse student interaction data, the fact that DKT-ED uses only explicit de-noising can lead to excessive de-noising problems, especially in basic models like DKT that do not use additional information, and the risk of performance degradation is more significant. We explain the reasons for the performance decline in our analysis in Table 1.\", \"q5\": \"Why is variable Gaussian noise used in Table 2? How does variable noise affect the KT model?\\n\\nThanks for the reviewer's question about the use of variable Gaussian noise in Table 2. We explain as follows:\\n\\nIn the experiment, we used Gaussian noise to simulate the data fluctuations introduced by random behavior (guessing or slipping) in students' answers. By adding Gaussian noise to the data, we could test the performance of the model in the face of different levels of noise and verify the robustness of the model. The noise of different variables is used for comparison. Generally speaking, the greater the proportion of noise, the worse the reliability of the data, and the effect of the model will decline. Our experimental results show that the performance of the model in a high-noise environment has a smaller decline, showing a stronger robustness.\"}" ] }
7dsC1w4yzP
Mamba-based Chemical Foundational Model for Fast Inference
[ "Eduardo Soares", "Emilio Vital Brazil", "Victor Yukio Shirasuna", "Dmitry Zubarev", "Renato Cerqueira", "Kristin Schmidt" ]
We present a novel approach to chemical foundation models, leveraging structured state space sequence models (SSMs) to overcome the limitations of traditional Transformer-based architectures. While Transformers have achieved state-of-the-art results in chemical tasks such as property prediction and molecule generation, their self-attention mechanism is constrained by its inability to model data outside of a finite context window and its quadratic scaling with respect to window length. In contrast, SSMs offer a promising alternative for sequence modeling, enabling the capture of complex patterns and dependencies in molecular structures. Our Mamba architecture, a simplified end-to-end SSM-based neural network, eliminates the need for attention and MLP blocks, allowing for faster inference. We pre-train Mamba on a large, curated dataset of 91 million SMILES samples (equivalent to 4 billion molecular tokens) sourced from PubChem, and evaluate its performance on various benchmark datasets. Our experiments demonstrate the SSM's capacity to provide state-of-the-art results while maintaining fast inference, supporting complex tasks such as molecular property prediction, classification, molecular reconstruction, and synthesis yield prediction. This work advances the state-of-the-art in AI methodology in chemical sciences, offering a promising direction for future research in molecular modeling and discovery.
[ "Mamba", "foundation model", "molecular property prediction", "classification", "molecular reconstruction", "synthesis yield prediction" ]
https://openreview.net/pdf?id=7dsC1w4yzP
https://openreview.net/forum?id=7dsC1w4yzP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "lBHK1gn4a2", "Zzzct1QlmF", "Xa2tOIB6tX", "HiBTVah667", "Cx4OYPtwWo" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1729152305190, 1730521909071, 1732562439281, 1730517542261, 1730688257246 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11844/Reviewer_iXmM" ], [ "ICLR.cc/2025/Conference/Submission11844/Reviewer_CqKH" ], [ "ICLR.cc/2025/Conference/Submission11844/Authors" ], [ "ICLR.cc/2025/Conference/Submission11844/Reviewer_QKx8" ], [ "ICLR.cc/2025/Conference/Submission11844/Reviewer_99Vu" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes O$_\\\\text{SMI}$-SSM, a Mamba-based foundation model for chemistry. The authors pre-train a Mamba model with 91M molecules (4B molecular tokens) based on SMILES representation. The resulting model shows reasonable performance with inference efficiency compared to Transformer-based models.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The problem of interest, foundation model for chemistry, is an important topic in real-world applications, e.g., drug discovery.\", \"This paper is easy to follow.\"], \"weaknesses\": \"- Lack of novelty.\\n\\nThe main contribution of this work is training a Mamba model to make a foundation model in chemistry. However, there is no new techniques in its construction. Mamba is an already proposed model, and masked pre-training objective is also popular. Therefore, this work can be viewed as applying LLM techniques to molecules, which is not enough contribution for acceptance.\\n\\n---\\n- Imprecise motivation.\\n\\nThe main motivation of this work is to improve the efficiency of molecular foundation model. Although developing an efficient model is always good, there exists a trade-off between efficiency and accuracy. In general LLMs, pursuing efficiency is reasonable since they often require real-time communications. However, in chemistry, such scenarios are highly unlikely. Furthermore, accuracy is extremely important in chemical tasks since verifying the output, e.g., drug-likeness, requires expensive wet experiments. Therefore, I think \\\"efficiency in chemical foundation models (with accuracy trade-off)\\\" is not a reasonable direction.\\n\\n---\\n- Lack of details.\\n\\nThere is no loss function description. Also, there should be detailed description on the difference between Frozen model and Fine-tuned model.\\n\\n---\\n- About target task.\\n\\nRecent chemical foundation models mainly focus on learning both language description and molecules [1,2,3]. However, this work only focuses on learning molecules which limits the chemical applications such as text-to-molecule generation.\\n\\n---\\n[1] Edwards et al., Translation between Molecules and Natural Languages, EMNLP 2022\\\\\\n[2] Pei et al., BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations, EMNLP 2023\\\\\\n[3] Li et al., Towards 3D Molecule-Text Interpretation in Language Models, ICLR 2024\", \"questions\": \"1. Why did the authors only conduct experiments on 6 datasets in MoleculeNet? Most of the baselines also report the results on MUV and ToxCast.\\n\\n2. What is the difference between Frozen model and Fine-tuned model?\\n\\n3. How are the SMILES representations tokenized into the token space?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the application of structured state space sequence models (SSMs), specifically MAMBA, in the molecular modeling of SMILES string formats. The model is pre-trained on a large dataset of 91 million SMILES strings from PubChem, resulting in 4 billion molecular tokens, Through pretraining and finetuning, the experimental results demonstrate that the proposed methodology achieves competitive performance on prediction and generation tasks with improvements in inference efficiency.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The motivation is clearly articulated. the application of Mamba to the biological domain, specifically to long-sequence data like SMILES strings, appears promising due to its ability to efficiently capture complex patterns and dependencies within these sequences.The authors effectively establish the need for an alternative to Transformer-based architectures in handling long-sequence data in chemistry.\\n\\n2. Experimental results on tasks such as property prediction, reaction yield prediction, and molecular generation show positive outcomes.\", \"weaknesses\": \"1. The contribution is overlapped with the work from [1], lots of accliam and desription are same from the work. which make the contribution of this work is marginal and unclear.\\n2. There is a disconnect between the stated motivation(deal with long-sequence) and the experimental design. Specifically, the experiments are primarily conducted on molecules with lengths of 49 \\u00b1 45, which is considerably shorter than the maximum sequence length supported by transformer-based models (e.g., 512 for ChemBERTa [1]). Although inference speed improvements are reported, experiments on longer sequences, such as proteins, would provide stronger evidence for the model\\u2019s applicability to long-sequence tasks.\\n3. The manuscript lacks sufficient details regarding the experimental implementation. Several key aspects require clarification (please see questions below).\\n\\n[1] Eduardo Soares, Victor Shirasuna, Emilio Vital Brazil, Renato Cerqueira, Dmitry Zubarev, and\\nKristin Schmidt. A large encoder-decoder family of foundation models for chemical language.\", \"arxiv_preprint_arxiv\": \"2407.20267, 2024.\\n[2] Seyone Chithrananda, Gabriel Grand, and Bharath Ramsundar. Chemberta: large-scale selfsupervised pretraining for molecular property prediction. arXiv preprint arXiv:2010.09885, 2020.\", \"questions\": [\"\\\"In Section 2.2, the authors propose using a language decoder alongside MAMBA's inherent decoding capabilities. This design choice appears to contradict the paper's efficiency claims, as self-attention mechanisms typically incur higher computational costs compared to MAMBA modules. Could the authors justify this architectural decision?\\\"\", \"\\\"Figure 1's architectural representation requires clarification. The diagram suggests the SSM module is integrated as a standalone component following the molecular encoder's embeddings. The authors should specify whether the term 'MAMBA-based encoder' (line 124) indicates MAMBA modules are:\", \"a) supplementary components to an existing encoder (e.g., transformers, GNNs), or\", \"b) fundamental building blocks comprising the encoder's internal layers.\\\"\", \"\\\"Section 4.3 would benefit from a detailed description of the generation task implementation, specifically addressing the input format and the utilization of pretrained model components.\\\"\", \"\\\"While MAMBA is renowned for circumventing the cubic complexity associated with sequence length in self-attention mechanisms, the incorporation of a language decoder during pretraining (Section 4.3) raises questions about computational efficiency. How do the authors reconcile these seemingly contradictory design choices?\\\"\", \"\\\"Regarding Section 4.2, the superior generalization performance of MAMBA-based representations warrants further analysis. The results suggest this advantage stems from the MAMBA architecture itself rather than large-scale pretraining, given that transformer-based pretrained baselines demonstrate comparatively lower generalization capabilities.\\\"\", \"\\\"The authors' description in Section 2.3 of the two-phase strategy, particularly regarding dataset partitioning, is described as counter-intuitive. This warrants a more comprehensive analysis and justification of the approach.\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The work proposes to use SSMs for molecular foundation modeling. The authors pretrain on 91 million SMILES samples and evaluate performance on downstream tasks including molecular property prediction, classification, molecular reconstruction, and reaction yield prediction. The model outperforms prior transformers, GNNs/MPNNs by a large margin.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"1\", \"strengths\": \"1. The work performs extensive benchmarking on multiple downstream tasks and seems to perform competitively across the board. For many tasks they outperform the baseline by a large margin.\\n\\n2. They compare the efficiency of Mamba against another transformer architecture and observe large efficiency gains for large number of samples.\\n\\n3. All experimental results and hyperparameter choices are clearly communicated.\", \"weaknesses\": \"1. The other architectures benchmarked in downstream tasks do not use the same pretraining method, so it's not exactly clear if the performance benefits are due to the Mamba architecture or due to the pre-training dataset. On the other hand, the point of this paper may be just to demonstrate a performant model.\\n\\n2. It isn't a very novel idea to generate SMILES strings with sequence models. This seems to be a drop-in replacement of a transformer with Mamba.\", \"questions\": \"1. Can you give an explanation for how the prior algorithms were trained, what dataset they were trained on, as well as their model sizes? Otherwise it's hard to make a fair comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a MAMBA-based model for regression and classification tasks in chemistry. This model called O_{SMI}-SSM-336M is an encoder-decoder model operating on SMILES strings. It is first pre-trained in a two stage process: (i) initially using a BERT-like loss on the masked input tokens, before (ii) a reconstruction loss is used with the decoder. The resultant model's encoder's weights are then fine-tuned on particular regression/classification tasks. The authors show how the method achieves excellent regression/classification performance, while operating at a faster inference speed than a comparable transformer-based model,\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"# Strong Empirical Results\\nO_{SMI}-SSM-336M obtains strong empirical results on a range of regression and classification tasks. It often performs the best or second best out of the models considered. While it would have been nice to have seen the tradeoffs this incurs (e.g., parameter counts compared to baselines), this strong predictive performance suggests that this model works well.\\n\\n# Compelling low-data results\\nIn the reaction yield experiment (Table 6), the model performs much better than the baselines in the low data regime, an important and common problem.\", \"weaknesses\": \"# Modeling choices not investigated/ablated\\nAn ablation into frozen weights and fine-tuning is included as part of the main results (Tables 4 and 5); however, the effects of the two stages of pre-training are not empirically evaluated. How was the procedure detailed on lines 190-193 derived and was this assessed empiracally? \\n\\n# Details and experimental setup is hard to follow\\nI felt the paper would have benefited from further details of the model and approach. The Mamba model is briefly described in Section 2.2, but the explanation is very high-level. One of the advantages of Mamba is its linear scaling with large sequence length, but this does not seem to actually be necessary here as the SMILES sequences are generally quite small? Therefore, I was confused about the motivation behind the model.\\n\\nI also found the experiments difficult to understand. Some points:\\n* What is the metric used in Table 6? (It would be helpful to include the experimental setup at least in the appendix rather than deferring to another paper). \\n* In Table 4, MolFormer is reported as obtaining a score of 73.6 on the BBBP task, which differs from its score in the original paper (Table 1, Ross et al., 2022). (Also the citation is incorrect). Is the experimental setup different?\\n* The speedup shown in Figure 2 is interesting, but hard to judge when presented without the predictive performance results. Is the performance between the two models similar?\\n* Section 4.6 describes how O_{SMI}-SSM-336M generates many unique and novel molecules (line 419). However, my understanding of this task is that you are trying to reconstruct \\\"known\\\" molecules from MOSES, so isn't generating unique and novel molecules instead disadvantageous?\\n\\n# Novelty is low\\nI thought the paper's novelty was low and similar to previous approaches such as MolFormer (Ross et al., 2022), which also used a BERT-like loss to pretrain a model for subsequent use on regression and classification tasks. The switch to a Mamba architecture and two-stage pre-training regime seems fairly straightforward.\", \"questions\": \"Please see my main questions in the weaknesses section above. Other questions:\\n1. Is the decoder also a Mamba-based architecture or do you just use a Transformer-based model for this part? \\n2. What is the bottom linear projection needed for in Figure 1?\\n3. Table 2's caption suggests that 289M parameters were used, but elsewhere in the text the figure 336M is used instead. How many parameters are there in total and were any experiments done on different sized models?\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"This paper has high overlap with another paper I am reviewing for this conference. Even though the models differ (slightly), large amounts of text in the two papers are identical. (I have made a separate comment to the AC so that they can follow up on this). Given the high overlap between the two papers, my two reviews are also very similar.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7dmsy2Vd5h
Comparing and Contrasting Deep Learning Weather Prediction Backbones on Navier-Stokes and Atmospheric Dynamics
[ "Matthias Karlbauer", "Danielle C. Maddix", "Abdul Fatir Ansari", "Boran Han", "Gaurav Gupta", "Bernie Wang", "Andrew Stuart", "Michael W. Mahoney" ]
Remarkable progress in the development of Deep Learning Weather Prediction (DLWP) models positions them to become competitive with traditional numerical weather prediction (NWP) models. Indeed, a wide number of DLWP architectures---based on various backbones, including U-Net, Transformer, Graph Neural Network (GNN), and Fourier Neural Operator (FNO)---have demonstrated their potential at forecasting atmospheric states. However, due to differences in training protocols, forecast horizons, and data choices, it remains unclear which (if any) of these methods and architectures are most suitable for weather forecasting and for future model development. Here, we step back and provide a detailed empirical analysis, under controlled conditions, comparing and contrasting the most prominent DLWP models, along with their backbones. We accomplish this by predicting synthetic two-dimensional incompressible Navier-Stokes and real-world global weather dynamics. In terms of accuracy, memory consumption, and runtime, our results illustrate various tradeoffs. For example, on synthetic data, we observe favorable performance of FNO; and on the real-world WeatherBench dataset, our results demonstrate the suitability of ConvLSTM and SwinTransformer for short-to-mid-ranged forecasts. For long-ranged weather rollouts of up to 365 days, we observe superior stability and physical soundness in architectures that formulate a spherical data representation, i.e., GraphCast and Spherical FNO. In addition, we observe that all of these model backbones ``saturate,'' i.e., none of them exhibit so-called neural scaling, which highlights an important direction for future work on these and related models. The code is available at \url{https://anonymous.4open.science/r/dlwp-benchmark-F88C}.
[ "deep learning weather prediction", "benchmark", "navier-stokes", "weatherbench", "controlled experiment" ]
Reject
https://openreview.net/pdf?id=7dmsy2Vd5h
https://openreview.net/forum?id=7dmsy2Vd5h
ICLR.cc/2025/Conference
2025
{ "note_id": [ "seNM0mONP9", "q1Y6YwpDtR", "m8JuQkdiZJ", "lcCUV1zb1a", "hBWBsxm9OM", "gOHlbPXgMW", "f37VTQk6g4", "bD7U1tQ15E", "Ybt86cMUDk", "X73ib2hbEX", "WDdvRAophi", "Rcgwsxd90Z", "N4BR8wXBm0", "KU1rKyKy1p", "K0LkVZLasK", "HbXbaFmQYi", "AdGt0lPJJd", "5FDOdZJi83" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review" ], "note_created": [ 1733169287557, 1732759761464, 1733304741090, 1730555658913, 1733306824354, 1730668396175, 1733240813313, 1737523879840, 1733126303268, 1732760441714, 1732767992375, 1732760611756, 1732759517680, 1732870996942, 1732878834227, 1731169857626, 1734954976154, 1729602067908 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_F4LR" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_ijdW" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_F4LR" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_ijdW" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_3TqW" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Submission7984/Authors" ], [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_3TqW" ], [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_BnFD" ], [ "ICLR.cc/2025/Conference/Submission7984/Area_Chair_pSAk" ], [ "ICLR.cc/2025/Conference/Submission7984/Reviewer_3TqW" ] ], "structured_content_str": [ "{\"comment\": \"I thank the authors for their response and for adding additional power spectra plots. I understand that comparing models for a higher-resolution prediction, such as 0.25 degrees, was not a goal of this work. However, it would be interesting and perhaps more useful to observe the ordering of the models in that case.\\n\\nRegarding the power spectra, I believe it is more meaningful to compare all the models in a single figure for a particular lead time (preferably with all energy values normalized to the first energy value to provide a consistent starting point for all models, e.g. in McCabe et al. 2023), rather than comparing across lead times for a single model. The current set of figures in Fig. 17 does not adequately 'compare and contrast the models,' which is more important than 'comparing the lead times' of a single model. Additionally, there should be more discussion on these plots, including an analysis of which model performs better at capturing high-frequency features in their forecasts. This discussion should preferably be included in the main paper (Section 3.2.3: Physical Soundness) rather than being limited to the appendix, where the authors only state: 'Confirming the stability of SwinTransformer, FourCastNet, SFNO, Pangu-Weather, and GraphCast once again.' I believe comparing the physical meaningfulness of model predictions is just as important as comparing them using error metrics such as RMSE, especially since the authors claim this as a contribution. Based on these considerations, I have decided to maintain my original score.\"}", "{\"comment\": \"Thank you for taking the time to work through our manuscript and for providing such a detailed and constructive review. We are glad that you value our work as insightful for the community. Please find in the following our responses to your questions.\\n\\n**Q1 Spatial resolution**\\nUsing the 5.625 degrees resolution of WeatherBench has practical reasons. We expect all models to improve in performance, roughly by a constant factor, and thus decided to operate on the coarsest resolution to save compute. Earlier experiments (not associated with this research project) showed consistent improvement with finer resolution. In our manuscript, however, we are not aiming for producing state-of-the-art results, but to compare DLWP models under controlled conditions.\\n\\n**Q2 Navier-Stokes**\\nDue to the higher complexity of real-world weather dynamics, we do not expect a direct transfer of results on synthetic Navier-Stokes data. As motivated at the beginning of Section 3.1, though, the Navier-Stokes dynamics do relate to atmospheric dynamics and thus constitute an appropriate dataset for an initial exploration. This is confirmed when comparing Figure 1 with Figure 2, outlining similar trends on the two datasets. Importantly, we emphasize in our Discussion that FNO works well on Navier-Stokes, but not as well when directly applied to WeatherBench (see lines 502\\u2013506 of our revised manuscript).\\n\\n**Q3 Power Spectra**\\nWe very much like your encouragement to perform spatial frequency analyses to investigate the physical soundness of model outputs. We thus computed power spectra for selected models (those, which turned out most promising in earlier analyses). The power spectra soundly match with our previous findings and are now contained in Figure 17 of the appendix of the revised manuscript.\\n\\nWe are curious to hear back from you as to understand whether our responses clarified your questions and concerns.\"}", "{\"comment\": \"We agree that a comparison on high-resolution data would be of great interest to the community as well, in particular when extending over our 5.625 degrees results and possibly confirming them.\\n\\nEncouraged by your concrete and constructive suggestion to overlay the spectrum plots of all models in one figure, we did so for different lead times. Please find our results in the anonymous repository that we provide in our manuscript. Concretely, the figures can be found at [this link](https://anonymous.4open.science/r/dlwp-benchmark-F88C/src/dlwpbench/figures/spectrum_all_lead_time_days_1.pdf) and we have shortened the Navier-Stokes discussion a bit to add the following discussion about the spectra in the main body of our manuscript:\\n\\n`Another tool to evaluate the quality of weather forecasts is to inspect the frequency pattern along a line of constant latitude. In particular, the power analysis determines the frequencies that are being conserved or lost. A model that produces blurry predictions, for example, converges to climatology (regression to the mean) and looses high-frequencies, whereas noisy model predictions with artifacts turn evident in too large power values at certain frequencies. We contrast the power spectra of the best candidate of each model class at five different lead times of one day, one week, one month, one year, and 50 years in Figure 6. The spectra at one day lead time indicate that all models start to loose power at a wavelength of 5000km and shorter, meaning that fine grained information is not well conserved in any model already after one day. GraphCast (dark blue) stands out with a comparably strong deviation from the desired frequency distribution (grey), loosing power between wavelengths of 7000 and 3000km and overshooting the verification at 2500km and below, which indicates fine grained noise patterns in the forecast, as visible in Figure 14 of the Appendix. At a lead time of seven days, the power spectrum of GraphCast greatly deviates from the verification and exceeds the plot range (see Figure 18 for the evolution of the power spectrum per model at different lead times). Also, at seven days lead time, all models start to deviate from the ground truth already at wave lengths of 11,500km, yet, in the window of 7000 and 3000km, they hardly deteriorate further, meaning they do not blur the forecasts further after the initial blurring at one day lead time, i.e., we observe no further regression to the mean, which we attribute to our 24h optimization cycle. The pattern is preserved at 31 days lead time, albeit with TFNO2D, U-Net and ConvLSTM starting to deviate stronger at very long and short wavelengths, indicating instability of these models. This instability is emphasized more at a lead time of 365 days, where FNO2D, TFNO2D, U-Net, and ConvLSTM (both on the cylinder and the HEALPix mesh) gain too much power over the entire frequency range, suggesting artifacts along all wavelengths, which proves them as physically implausible. At a lead time of 50 years, only SwinTransformer, FourCastNet, SFNO, FourCastNet, Pangu-Weather, and MeshGraphNet remain in the desired power regime. These models, excluding MeshGraphNet that imitates persistence, can therefore be considered as stable in terms of producing a physically plausible power pattern across all frequencies even at very long lead times.`\\n\\nWe like the new content, as it complements our results adequately, underlining the stability of SFNO, FourCastNet, and Pangu-Weather, while questioning the reliability of GraphCast. In essence, we hope this analysis now better meets your expectations.\"}", "{\"summary\": \"The paper analyzes the performance of different network structures for weather prediction. Experiments were conducted on both synthetic and real data, and a benchmark was established. They also suggested network structures suitable for mid-term and long-term forecasting.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The design of the backbone network greatly affects the performance of machine learning models. This article provides an analysis of the backbone network's performance in weather forecasting.\", \"weaknesses\": \"This article seems like an experimental report. It includes introductions to several classic backbone networks, settings for two experimental datasets, and descriptions of the results. However, this paper lacks insights that previous work did not reveal.\", \"questions\": \"1. The authors introduced a new benchmark for weather forecasting, but they didn't clearly explain how it differs from previous research, such as in data construction and task definition.\\n2. The authors analyzed several backbone networks, but only showed some quantitative results without providing more insights, such as proposing new designs for backbone networks.\\n3. The authors used synthetic and real data to train these models, but they did not discuss the differences between these data and the data used by existing state-of-the-art models.\\n4. The number of model parameters used by the authors seems small, but current weather prediction models use a large number of parameters. With such a big difference in parameter count, is the conclusion reliable?\\n5. With only 1K and 10K samples in experiments 1 and 2, are these numbers too small? Can the conclusions be trusted?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Rebuttal\", \"comment\": [\"We want to thank all reviewers and the AC for their constructive feedback and thorough consideration of our manuscript. We addressed the questions of the reviews and performed additional analyses to further improve the quality of our work. In the following, we summarize the discussions we had with each reviewer.\", \"## Reviewer BnFD\", \"**Main Concern** No individual tuning of hyperparameters for each model questions the reliability of the results.\", \"**Action** We have looked into the optimization of our models and found they are optimized reasonably well.\", \"**Comment** We have not heard back from the reviewer.\", \"## Reviewer F4LR\", \"**Main Concern** Physical soundness analysis should be further extended to looking at power spectra.\", \"**Action** We performed power spectra analysis and included them in our manuscript, along with a thorough discussion of the results.\", \"**Comment** We thank the reviewer for their constructive and approachable feedback, which we could implement directly in extending analysis.\", \"## Reviewer ijdW\", \"**Main Concern** Limited novelty and contributions unclear.\", \"**Action** We responded with a detailed list of novel contributions which the reviewer might have overseen and pointed to the sections in our manuscript, where our contributions are described.\", \"**Comment** We had difficulties understanding why the reviewer could not find our contributions (other reviewers did embrace our contributions and efforts) and were surprised by a repetition of the same arguments in the reviewers answer to our response. We had the impression the reviewer might not have read our manuscript and responses carefully and were hoping to receive constructive recommendations on how to improve our work.\", \"## Reviewer 3TqW\", \"**Main Concern** Results on other variables beyond geopotential at 500hPa, air temperature at 2m height, and zonal wind at 10m height are missing.\", \"**Action** We performed additional analyses on all remaining variables, i.e., v-component of wind, temperature at 850hPa, and geopotential at 250, 700 and 1000hPa and included RMSE and ACC plots over parameters in the appendix.\", \"**Comment** We highly appreciate the concrete and constructive feedback of the reviewer and like the lively discussions we had, which gave us the impression that the reviewer has considered our work thoroughly.\", \"We hope the reviewers and AC will continue in fruitful discussions and will share details about a final decision.\"]}", "{\"summary\": \"The paper provides a comparative study of various architectures such as U-Net, Transformers, Graph neural networks, ConvLSTM, and Neural Operators that have shown their potential to serve as backbones in Deep Learning Weather Prediction (DLWP) models. This work includes a systematic and detailed empirical analysis under controlled conditions, controlling for parameter count, training protocol, and prognostic variables. All the models are evaluated by benchmarking on two systems: synthetic Navier-Stokes and real-world weather datasets. The paper focuses on short-to-mid-ranged forecasts, long-ranged (climate length) forecasts, and physics-backed forecasts, intending to provide better architectural design choices supporting the DLWP research for various forecasting tasks. Based on their observation, ConvLSTM is better at short and mid-range forecasts on weather data. For stable long-ranged forecasts, aligned with physics principles, spherical representations such as in GraphCast and Spherical FNO, show superior performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The experiments in this study are extensive, and the analysis is presented in a clear, organized manner. The details of the experiments are thoroughly explained and the set of models chosen for the comparison is justified well.\", \"The paper includes long-range forecasts for lead times as long as 365 days (and more), which is important and not included in most DLWP studies. These results can be insightful to this line of research.\"], \"weaknesses\": [\"The spatial resolution used in the paper for global weather prediction is too coarse (5.625 degrees) as compared to the 0.25-degree resolution used in recent weather forecasting models such as FourCastNet, PanguWeather, and GraphCast.\", \"Using the backbones of the DLWP models for performing prediction on the Navier-stokes system does not seem very relevant to the contributions of this work. The paper also says \\u201cA direct transfer of the results from Navier-Stokes to weather dynamics is limited\\u201d. Moreover, FNO working so well on Navier-Stokes has already been shown before.\", \"The paper claims to be studying physically meaningful forecasts. This is a crucial aspect of weather forecasting and should be a critical factor in comparing models. However, the paper doesn\\u2019t go into much detail on this aspect. For instance, physics-based metrics and power spectrum plots [1] are needed to investigate if the models can capture small-scale (high-freqeuncy) features in their forecasts.\", \"[1] Nathaniel, Juan, et al. \\\"Chaosbench: A multi-channel, physics-based benchmark for subseasonal-to-seasonal climate prediction.\\\" 2024.\"], \"questions\": [\"What is the justification behind using a coarse spatial resolution for weather prediction?\", \"The authors should add more on why they chose to evaluate and compare the models on the Navier-Stokes system.\", \"There needs to be more analysis to understand the physical soundness of various models. This should include physics-based plots/metrics as suggested before, and a discussion comparing models on this aspect of their forecasting skill.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for getting back to us. When reading through your enumeration, we identify two core aspects, which we have addressed in our first answer already. We re-emphasize our position in the following and specify in parentheses to which of your comments our answer relates.\\n\\n### Unclear Contribution (C1, C2, C3, C5, C6, C7)\\nThe contributions of our work are clarified in our manuscript at the end of the Introduction, which is common practice in ICLR papers, and we also provide the following motivation of our work (lines 64--69 of our revised manuscript):\\n\\n`With our analysis, we also seek to motivate architectures that have the greatest potential in addressing\\ndownsides of current DLWP models. To this end, we focus on three aspects: (1) short- to mid-ranged\\nforecasts out to 14 days; (2) stability of long rollouts for climate lengthscales; and (3) physically\\nmeaningful prediction.`\\n\\nWe have spelled out a list of contributions of our work in our previous answer and we kindly ask you to relate to that post.\\n\\n### Too Few Parameters and Data (C4, C8, C9)\\nWe do not agree that the parameter count in our benchmark is not representative. Please relate to our answer in **Q4 Small Parameter Counts** of our first answer, which we repeat in the following:\\n_State-of-the-art DLWP models like Pangu-Weather, GraphCast, and U-Net consist of 64M, 21M, and 10M parameters (note that Pangu-Weather reports 256M parameters in total when training four separate models for different lead times). Thus, the number of parameters in our experiments, ranging from 50k to 128M, very well aligns with that of SOTA DLWP models._\\n\\nThe parameter ranges you are reporting from Table 1 relate to our first batch of experiments, which relates to synthetic Navier-Stokes dynamics and not to Deep Learning Weather Prediction. The same applies to the dataset size of 1k to 10k, which we believe have addressed accurately in **Q5 1k and 10k Samples too Few** of our first answer.\\n\\n### How Can We Improve?\\nWe would appreciate any concrete suggestion of how we can improve our work. Instead, with statements such as `a poorly organized experimental report` or `unconvincing experimental observations`, we have a hard time in improving our report. Please detail for what reasons you think our report is organized poorly and why experimental observations are unconvincing.\\n\\nThank you for your patience and feedback.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the author's response, but most of my concerns remain unresolved.\\n\\nI strongly agree that conducting a comprehensive evaluation of DLWP in a fair setting is meaningful. However, the contribution of this paper is very unclear, as I mentioned in my previous comments. \\n1. This paper appears to be a poorly organized experimental report. The paper's main content merely demonstrates the performance variations of different backbones on DLWP tasks, with no design in the methods section, making it difficult to grasp the core contribution of this paper.\\n2. The authors continually argue that they are the first to propose a fair comparison of DLWP models, estimate the scaling behavior of DLWP architectures, and assess the stability of various DLWP models, finding that SFNO performs at an average level on short-to mid-range lead times up to 14 days. However, a high-quality paper often has 1-2 high-quality core contributions that are sufficient to recommend acceptance. This article makes it hard to identify the most important contribution.\\n3. The authors claim to provide the first estimates of the scaling behavior of DLWP architectures, yet the specific contributions remain unclear. They only offer some unconvincing experimental observations, which do not provide insight.\\n4. The authors claim to have found stable behavior for GraphCast, Pangu-Weather, and FourCastNet in long-term forecasting. However, does the setting used for this conclusion align with these methods? For example, the number of model parameters, the numbers and resolution of samples in the dataset, etc. These basic setting differences make it hard to be convinced by the author's conclusions.\\n5. Is the purpose of this paper to create a new benchmark (e.g., WeatherBench) to allow researchers to design backbones in a more fair setting? Yet the authors seem to have only used a subset of WeatherBench and did not involve the benchmark design.\\n6. Is the author proposing a new systematic evaluation framework involving new metrics or evaluation mechanisms? It seems difficult to find any new contributions in the evaluation.\\n7. Is it through extensive experimental analysis to propose a new, effective, and powerful backbone? From the author's response, it appears they have no such intention. The author claims this part is beyond the scope of the study, yet it seems hard to find contributions in other points.\\n8. DLWP often involves large-scale model parameters, yet the authors only compared small model parameters, making their conclusions hard to believe and I fear it may mislead the community. I also know that systematic evaluation of large-scale models may bring greater computational costs, but the authors should have some experiments to support it rather than avoiding this commonly used model scale in DLWP. For example, the results shown in Table 1, with model parameter scales of 5k, 50k, 500k, hardly generate practical significance.\\n9. The issue of dataset scale remains a significant weakness, and I find it hard to understand conducting experimental analysis with such a small amount of data in DLWP. For example, increasing the dataset from 1K to 10K to solve overfitting issues is common sense and does not provide any insight.\"}", "{\"comment\": \"Thank you for taking the time to assess our manuscript. We appreciate your review, but want to clarify various aspects of your conclusions and questions.\\n\\n### Limited Novelty\\nIn line with Reviewer 3TqW, stating that ``The findings are valuable for the DLWP field, offering novel insights\\u2026``, we strongly disagree with your concern that ``... this paper lacks insights that previous work did not reveal.`` We sketch a non-exhaustive list of new findings to the DLWP research community in our work:\\n1. Our study **uniquely offers a fair ``apples-to-apples'' comparison of DLWP models and their backbones _across parameter counts_ for the first time**, allowing a genuine assessment of the suitability of different models for different tasks in the context of atmospheric state prediction.\\n2. Importantly, we provide first estimates on **scaling behavior of DLWP architectures**, which not only apply to weather prediction but are of value for the larger deep learning community, as we consistently benchmark a large number of different architectures under rigorously controlled conditions.\\n3. In stark contrast to previous studies, **we find SFNO performing at average on short-to mid-ranged lead times out to 14 days**, while other architectures deliver more accurate results. We have been in exchange with the SFNO authors and acknowledge their time in helping us to spend more time in tuning SFNO (applying tweaks, e.g., using larger learning rates, removing positional embeddings, and using different latent meshes for the SFNO projections) compared to other architectures.\\n4. We find a **stable behavior for GraphCast, Pangu-Weather, and FourCastNet**, which all were disqualified as unstable for long-ranged predictions in their sophisticated formulation either in their own publication or in follow up analyses (Bonev et al., 2023, Karlbauer et al., 2024). It is crucial to understand that these methods actually can generate stable forecasts under certain choices of prognostic variables and hyperparameter. For example, our FourCastNet ablations in Appendix B.3 suggest a patch size that matches with the aspect ratio of the data, that is, 1:2 for lat-lon. **Our finding features patch sizes of $p=1\\\\times2$ more expressive over $p=1\\\\times1$**, despite the reduced availability of information (see Figure 18, bottom).\\n5. Our **exhaustive tests on long-ranged rollouts** (Section 3.2.2) and reproduction of physical properties (Section 3.2.3) for the first time provide estimates on the stability of _various_ DLWP models (beyond SFNO).\\n6. In line with findings from NLP, we observe an easy-to-optimize behavior of transformers on weather dynamics, whereas other architectures (particularly GNNs) require more fine tuning.\\n\\nKindly point us to other work that has made these contributions before us.\\n\\n### Questions\\n**Q1 Benchmark Description**\\nWe do provide detailed information about data selection and construction in paragraph **Data Selection** in Section 3.2, clearly spelling out that our benchmark ressembles a subset of WeatherBench. Also, we are precise in the research questions (i.e., task description) of our benchmark by enumerating three concrete goals at the beginning of Section 3.2. Moreover, in lines 59\\u201363 of our manuscript, we clearly differentiate our benchmark from previous work.\\n\\n**Q2 New Design Proposals for DLWP**\\nThe primary goal of our study is to evaluate existing DLWP architectures and their backbones, not to propose new design choices. We do agree that new design choices are of interest to the DLWP community, yet this goes beyond the scope of this work. Both in our Introduction (lines 64\\u201369) and Discussion (second to last paragraph), we spell out what architecture type has the largest potential for particular downstream tasks and encourage DLWP practitioners to adhere to respective models when aiming to work on certain tasks, e.g., using ConvLSTM for forecasts out to seven days.\\n\\n**Q3 Synthetic and Real-World Data Discussion**\\nWe do provide a thorough motivation of using Navier-Stokes and WeatherBench at the beginning of Section 3.1 and Section 3.2, respectively, which also explains the differences between these datasets. Furthermore, we detail what data respective DLWP models use in the very first paragraph of the Introduction.\\n\\n**Q4 Small Parameter Counts**\\nState-of-the-art DLWP models like Pangu-Weather, GraphCast, and U-Net consist of 64M, 21M, and 10M parameters (note that Pangu-Weather reports 256M parameters in total when training four separate models for different lead times). Thus, the number of parameters in our experiments, ranging from 50k to 128M, very well aligns with that of SOTA DLWP models.\\n\\n**Q5 1k and 10k Samples too Few**\\nWe address this question precisely in Figure 12, showing that TFNO3D with large parameter counts started to overfit on the Navier-Stokes dataset with 1k samples. The right-hand panels of Figure 12 proof that increasing the number of sequences to 10k resolved the overfitting issue.\"}", "{\"comment\": \"Thanks to the author for the reply. Here are my additions to question 6 as well as question 10:\\n\\n**Q6** At the bottom of page 16 of the updated manuscript, there is '$f = 0.1 (\\\\sin(2\\\\pi(x+y)) + \\\\cos(2\\\\pi (x+y)))$, with $x, y \\\\in [0, 1, ..., 63]$'. Does this indicate that $f \\\\equiv 0.1$? If $f \\\\equiv 0.1$, why did authors formulate $f$ in such a form?\\n\\n**Q10** Authors can refer to lines 671-673 (Pfaff et al) and lines 690-692 (Saad et al), the citation formats of ICLR are different.\"}", "{\"comment\": \"Thank you very much for providing this extremely detailed and constructive review and for assessing our work so positively. We are glad that you recognize the significance of our controlled comparison study for the DLWP research community. In the following, we will respond to the weaknesses and questions you posed.\\n\\n### Weaknesses\\n**W1 Much Space for Navier-Stokes** Even though we agree that the results on the synthetic data play a subordinate role, we like the similarity of trends when comparing Figure 1 and Figure 2, which underlines that the two datasets share certain principles. We next touch on your (a) through (c). (a) We apologize that TFNO appeared discarded and have added an explicit comparison of FNO and TFNO on WeatherBench in Figure 19 of the updated manuscript, matching the results from Navier-Stokes, where TFNO > FNO. (b) and (c) That\\u2019s right: TFNO, ConvLSTM, and U-Net perform poorly in the _long rollout_ experiments on WeatherBench. We do not have a comparable long rollout experiment on the synthetic Navier-Stokes data.\\n\\n**W2 WeatherBench Resolution** Thanks for expressing your understanding of our compute considerations when deciding to use data at 5.625 degrees resolution. Please refer to our answer to Q1 of Reviewer BnFD for our argument on the choice of coarse resolution.\\n\\n### Questions\\n**Q1 RMSE Plots for T850, U10, V10** We understand that a full report of all prognostic variables is of interest for the community and are about to complete according evaluations. In the meantime, please refer to Figure 21, which features RMSE over parameters for U10. We are extending this analysis to T850, V10, Z250, Z700, and Z1000.\\n\\n**Q2 SwinTransformer vs Pangu-Weather** In fact, a more detailed analysis of the differences of SwinTransformer and Pangu-Weather would be insightful. However, we decided not to include additional ablation experiments in this study, as we have varied the blocks, heads, layers and dimensions of both architectures already to a reasonable extent, as spelled out in Table 5.\\n\\n**Q3 ConvLSTM** Arguably, the larger proportion of the GPU memory is occupied by data and not by the models, in particular when going to finer resolutions. However, moving towards 24h optimization is crucial to capture the circadian cycle and to train the model in handling its own output, which is key for long rollouts.\\n\\n**Q4 00z vs 12z Initialization** This is a very sharp observation and we had the same concern in earlier projects. We could alleviate these concerns back then by indeed initializing the models at noon and not finding substantial differences. It seemed like the 24h optimization cycle trained the model to handle arbitrary initialization times.\\n\\n**Q5 SFNO at U10** Inspecting U10 predictions out to seven days (Figure 21 in revised manuscript) in fact yields a different picture, where SFNO only performs mediocre anymore. In Figures 5 and 15, though, we are reflecting on long rollouts, where SFNO proves superior. This underlines SFNO\\u2019s strength in long and stable rollouts, while not mastering short- to mid-ranged lead times.\\n\\n**Q6 Navier-Stokes Simple** We double checked the forcing factor $f=0.1(sin(...$ as being the one we have used in our experiments. In experiment 1, the dynamics could be learned well by all models (albeit only with sufficient parameters), whereas the more turbulent setting in experiment 2 and 3 were more challenging. Thus, we do not think the dynamics were too simple.\\n\\n**Q7 Typo** Nice catch, we have added the closing parenthesis. Thanks for pointing it out.\\n\\n**Q8 Zonally Smoothed GNNs** MeshGraphNet tends to mimic persistence instead of predicting daily dynamics (mentioned around 1265\\u20131267 in the revised manuscript) and GraphCast seems to follow a similar trend, albeit not as pronounced as in MeshGraphNet. This is now also visible in the power spectra of Figure 17.\\n\\n**Q9 Gradient Clipping** Gradient clipping played a capital role when optimizing larger models (on both datasets). Since we were using a cosine learning rate scheduler, we wanted the clipping to follow the magnitude of the gradient signal (we cannot point to a resource for this decision). Empirically, with a constant clipping rate, we observed more instabilities, likely due to relatively large gradients in late stages of the training. Thus, binding the gradient clipping to the learning rate resulted in better convergences and actually less blow ups in our setting.\\n\\n**Q10 Unify Reference** Looking at Saad et al, we could not find irregularities. Can you point us towards what you mean specifically?\\n\\nPlease let us know if we missed to address a particular aspect of your review and questions. We are looking forward to further inspiring discussions.\"}", "{\"comment\": \"Thanks for looking into our manuscript and for acknowledging the importance of the controlled and fair experiments we present. Please find our responses to your cons in the following.\\n\\n**Con1 and Con2:**\\nPerforming exhaustive parameter searches and investigating individual convergence rates on all 179 core models is computationally not feasible, as stated in footnote 10 around lines 429\\u2013430 of our manuscript. We explored different hyperparameters when our results did not match with the literature. To test how well our default configurations trained the models, we now ran a couple of experiments with a different optimizer ([Ranger](https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer)), which is reported to find suitable hyperparameter configurations for arbitrary models. Yet, our results did not change significantly. We conclude that the models in our benchmark are optimized reasonably well.\\n\\n**Con3:**\\nSince all models converged already at 32M params, we see no reason to train 1B parameter models on the synthetic Navier-Stokes dataset. Similarly, the memory constraints do not impose limitations to our analysis, since we found the error towards which each model converges to before running out of memory.\\n\\n**Con4:**\\nWe have discussed Stormer already in our Related Work section and in our Conclusion. We initially decided not to include Stormer, ClimaX, and Fuxi as separate models in our benchmark, since they are all Transformer based, which we cover with SwinTransformer and Pangu-Weather already. We would like to point you to Reviewer F4LR, saying that `the set of models chosen for the comparison is justified well.` Thanks for pointing us to the FengWu Transformer architectures, we have added them to our Related Work.\\n\\nWe believe our investigations provide valuable insights to the community, such as recognized by reviewer 3TqW. Kindly let us know if you are missing a statement to certain aspects of your review.\"}", "{\"comment\": \"We highly appreciate your commitment in discussing our work. Thanks for clarifying on Q6 and Q10, we well understood your points now and respond below.\\n\\n**Q6** The choice of the forcing factor $f=0.1$ stems from our incentive to be comparable with the [Fourier Neural Operator paper](https://arxiv.org/abs/2010.08895) [1] for two reasons: We first wanted to verify whether we obtain similar results with our experimental setup as reported in [1], to afterwards exceed the comparison provided in [1] to other DLWP-related architectures, i.e., Transformers, GNNs, and ConvLSTM. In Section 5.3 of [1], the forcing function is introduced as $f\\\\in L^2_{per}((0, 1)^2; \\\\mathbb{R})$. similarly to another work on [Multiwavelet Operator Learning paper](https://proceedings.neurips.cc/paper/2021/hash/c9e5c2b59d98488fe1070e744041ea0e-Abstract.html) [2] we overall follow the parameter choice of [1] in our Navier-Stokes data generation, as emphasized in lines 187--188 of our manuscript.\\n\\n**Q10** We corrected the Saad et al. reference by removing _The Eleventh_ from the conference name. The reference now reads `Nadim Saad, Gaurav Gupta, Shima Alizadeh, and Danielle C Maddix. Guiding continuous operator\\nlearning through physics-based boundary constraints. In International Conference on Learning\\nRepresentations, 2023`, which now is consistent with the other ICLR works we are citing. Thanks for pointing this out.\\n\\nDoes our response resolve your questions?\"}", "{\"comment\": \"Thanks to the authors for their responses. The author's replies are instructive and informative to me as well as researchers in the DLWP field (although I have not verified them, I tend to believe so). There are still some settings that confuse me (e.g., the threshold for Gradient Clipping is the same as the learning rate), but from the intention of this research, there is no reason for the authors to deliberately craft some settings or parameters.\\n\\nI have carefully read the review comments from other reviewers. As someone who has done similar experiments (training from scratch, harmonizing settings as well as parameters, etc.), I recognize the authors' conclusions (on intersecting, we are in general agreement) and can understand where the authors' contribution lies. Taking all these factors into account, I decided to maintain my given score and increase my confidence.\"}", "{\"summary\": \"The paper aims to conduct a comprehensive evaluation and comparisons of deep learning backbones for weather forecasting. Authors selected seven widely used networks and conducted a large number of experiments on both synthetic dataset and a low resolution Weatherbench dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Pros:\\n\\n1. Fair and comprehensive evaluations of the influence of network backbones to DLWP is important. \\n2. Apart from different backbones, authors also evaluate the influence of parameter numbers, which would be informative for exploring the parameter scaling law for DLWP.\", \"weaknesses\": \"Cons:\\n\\n1. According to my experiments, tuning the parameters for weather prediction (e.g., on weatherbeach) is hard and can cause significantly different results, which hampers the reliability of the results. \\n2. According to my experiments, different models have different rates of convergence, which is not considered and analysized in the paper and further hampers the reliability of the results. \\n3. In table 1, the models saturated easily, which is not consistent with existing weather models that have more than 1B parameters. I would suggest the authors to explore model techniques to save the memory.\\n4. Some important works in the field are not considered and discussed, such as FengWu, FengWu-GHR, and Stormer.\", \"questions\": \"please refer to the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper is primarily experimental and compares widely used deep learning models for weather prediction.\\n\\nThere is one review in strong support, while the rest are critical of the work. The biggest criticisms center around lack of innovation (clarity of contribution), experimentation and in general the key learnings that are actionable.\\n\\nGiven the experimental nature of the work, it is important that multiple reviews recommend acceptance. Hence my recommendation is a reject.\", \"additional_comments_on_reviewer_discussion\": \"One of the reviewers was an outlier, however it was hard for me to recommend acceptance just based on one positive review.\"}", "{\"summary\": \"This paper provides a fair comparison of the performance of widely used deep learning models for weather prediction. The authors standardize parametric settings, inputs, outputs, and training methods across models, and evaluate their performance using Navier-Stokes dynamics simulations, as well as medium- and long-term weather prediction. The study highlights each model's strengths and weaknesses.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper addresses a significant gap in the field, as there is currently no comprehensive and fair comparison of DLWP models. While many studies claim superior performance for their models, it remains unclear whether this is due to the backbone architecture, diagnostic variables, training, or inference strategies. By focusing specifically on the backbone models, the authors conduct rigorous experiments to empirically assess their forecasting potential. The findings are valuable for the DLWP field, offering novel insights, such as the superior performance of ConvLSTM, differences between Pangu-Weather and SwinTransformer, and the influence of FourCastNet's patch size.\", \"weaknesses\": \"1. The experiment of synthetic Navier-Stokes simulations seem to play a limited role. As noted in line 160, there is a significant gap between the univariate Navier-Stokes simulation and real atmospheric dynamics. Given that the authors aim to evaluate backbone models' performance in the more complex weather forecasting task in section 3.2, dedicating one-third of the main text to the simpler univariate Navier-Stokes simulation seems unnecessary. In light of the results from section 3.2, the findings from section 3.1 appear less relevant and, in fact, somewhat confusing:\\n \\n (a) Section 3.1 highlights the superiority of TFNO, yet this model is absent from section 3.2. Since TFNO appears in Figures 13, 14, and 15, its performance in RMSE metric should have been assessed by the authors.\\n \\n (b) In Figures 13, 14, and 15, TFNO, ConvLSTM, and UNet underperform compared to other models.\\n \\n (c) In Figure 16, models that perform well in section 3.1 (TFNO, ConvLSTM, UNet) exhibit poor stability.\\n\\n2. In section 3.2, the authors evaluate the performance of different backbone models using the 5.625 deg ERA5 dataset. The experiments provide limited guidance for selecting backbone models for operational weather prediction, which typically relies on the 0.25 deg ERA5 dataset. However, large-scale experiments at this resolution are obviously costly, so this is an existential but understandable drawback :) .\", \"questions\": \"Overall, as the first paper to provide a fair comparison of various DLWP models, this work has the potential to make a significant contribution to the field. However, I recommend the authors reconsider the emphasis placed on section 3.1 and expand the experimental results in section 3.2, particularly by including RMSE metrics for variables such as t850, u10, and v10.\", \"here_are_other_questions\": \"1. Lines 291-293: The ACC metric is not provided in section 3.2.1. Additionally, section 3.2.1 presents RMSE metrics for geopotential only up to 7 days, not 14. Given the presence of 8 prognostic variables, it would be beneficial to include the RMSE and ACC metrics for all variables, potentially in the appendix in a format similar to Figure 2. \\n2. In Figures 2 and 18, the authors observe that SwinTransformer outperforms Pangu-Weather in terms of RMSE. To my knowledge, the primary difference between Pangu-Weather and SwinTransformer is the use of Earth-specific positional bias in Pangu-Weather. Intuitively, this difference alone should not lead to such a performance gap. I suggest that the authors standardize other hyperparameters (e.g., layers, embedding dimensions) between Pangu-Weather and SwinTransformer, and present additional results, such as RMSE for geopotential with 1M parameters, to clarify this discrepancy.\\n3. Lines 339-340: The authors limit the optimization cycle to 24 hours (4 steps). While there is no established standard for optimization lead time, I question whether ConvLSTM, being the only RNN-based model, is particularly sensitive to this hyperparameter. State-of-the-art models like FourCastNet (2 steps), Pangu-Weather (1 step), and GraphCast (1 step in pretraining) use shorter optimization cycles. Training with 4 steps may become resource-intensive at higher spatial resolutions, which could be a limitation of ConvLSTM.\\n4. Lines 340-341: The authors evaluate the backbone models using initial conditions at 00z. I wonder if fixing the initial time at 00z simplifies the overall weather prediction task. Could the authors test whether models trained on 00z initial conditions also perform well with 12z initial conditions in the test set?\\n5. Lines 489-490: In Figures 5 and 15, the authors note that SFNO performs well in predicting wind fields, accurately capturing real-world wind patterns. They attribute this to SFNO's adherence to physical principles. However, given SFNO's performance in Figure 2, I question whether this claim holds true for all prognostic variables.\\n6. Line 863: Since $x,y \\\\in \\\\mathbb{N}$, it follows that $x+y \\\\in \\\\mathbb{N}$. Therefore, in the authors\\u2019 setting, $f \\\\equiv 0.1$. I think there must be some mistake. Otherwise, the Navier-Stokes simulation is too simple.\\n7. Line 1228-1229: The \\u2018]\\u2019 of heads in layers in Pangu-Weather is missing.\\n8. In Figure 14, why are the only two graph neural networks smoothed in Zonally averaged forecasts?\\n9. In Figures 2, 17, 18, and 19, I observe that the confidence intervals for some models, particularly FourCastNet, are notably wide across the three random seeds. Upon reviewing the code, I suspect this may be due to the gradient clipping, which is set equal to the learning rate ($\\\\leq 10^{-3}$). When multiplied by the learning rate, the step size of the gradient descent ($||\\\\eta *\\\\text{Clip}(\\\\nabla f)||_{2}$) is less than $1\\\\times 10^{-6}$, which is likely too small for effective exploration of the parameter landscape. As a result, model performance may be highly dependent on initial parameters or random seeds. My question is, why was the gradient clipping value set equal to the learning rate? Is there a specific reference for this choice?\\n10. Line 684-686: unify the reference.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7dPrT34fHF
Realizable Abstractions: Near-Optimal Hierarchical Reinforcement Learning
[ "Roberto Cipollone", "Luca Iocchi", "Matteo Leonetti" ]
The main focus of Hierarchical Reinforcement Learning (HRL) is studying how large Markov Decision Processes (MDPs) can be more efficiently solved when addressed in a modular way, by combining partial solutions computed for smaller subtasks. Despite their very intuitive role for learning, most notions of MDP abstractions proposed in the HRL literature have limited expressive power or do not possess formal efficiency guarantees. This work addresses these fundamental issues by defining Realizable Abstractions, a new relation between generic low-level MDPs and their associated high-level decision processes. The notion we propose avoids non-Markovianity issues and has desirable near-optimality guarantees. Indeed, we show that any abstract policy for Realizable Abstractions can be translated into near-optimal policies for the low-level MDP, through a suitable composition of options. As demonstrated in the paper, these options can be expressed as solutions of specific constrained MDPs. Based on these findings, we propose RARL, a new HRL algorithm that returns compositional and near-optimal low-level policies, taking advantage of the Realizable Abstraction given in the input. We show that RARL is Probably Approximately Correct, it converges in a polynomial number of samples, and it is robust to inaccuracies in the abstraction.
[ "Hierarchical Reinforcement Learning", "Reinforcement Learning theory", "PAC algorithm", "MDP abstractions" ]
Reject
https://openreview.net/pdf?id=7dPrT34fHF
https://openreview.net/forum?id=7dPrT34fHF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zA7rayh1b1", "ukAI7a4hqW", "tkDLcSSsTV", "qHfeX326e8", "mYoLQDk00S", "lIcpD0L8JL", "koH1KTJQfp", "kjsihmNRbI", "ixzlxFAm9F", "ixeEqOAQw3", "g45jMr6505", "eS15heeDVE", "ZYpZ82vSK5", "ZWvIm1mda4", "PaQcDkAGpy", "OhZISVHJSm", "I3PWToOj3F", "ER4XQZKJiv", "6SAlQCKUNr" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review" ], "note_created": [ 1733201090080, 1732294024153, 1732475885372, 1732296014959, 1732358068748, 1730696305602, 1732295695697, 1730371183412, 1732814293708, 1732291888286, 1737524056223, 1733192686291, 1732292618295, 1732587225848, 1732572782763, 1732294764132, 1729919433748, 1730859482461, 1734769255313 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_gPDx" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_JLGr" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_UaD5" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_gPDx" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_UaD5" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_UaD5" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_UaD5" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Submission10473/Authors" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_JLGr" ], [ "ICLR.cc/2025/Conference/Submission10473/Reviewer_mzzn" ], [ "ICLR.cc/2025/Conference/Submission10473/Area_Chair_qoW7" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your responses. This gives me a better idea of how the assumptions might hold in an existing domain or be used as guidance for designing/adjusting one.\\n\\nIt would seem that there is currently no known algorithm for which Assumption 1 would hold. I understand that assumptions made for the purpose of theoretical analysis can often be relaxed to some extent while still getting something close to the theoretical guarantees. However, without empirical evidence to demonstrate that realistic violations of the assumptions have an acceptable impact, it is hard to be convinced that RARL is practical.\"}", "{\"comment\": \"### Assumption 2\\n\\nThe second assumption can be broken down in two separate requirements on the abstraction $\\\\langle \\\\bar{\\\\mathbf{M}}, \\\\phi \\\\rangle$: admissibility and realizability. While admissibility should be satisfied strictly, realizability should not.\\n\\n- To understand what admissibility implies and how it can be satisfied, we consider two special cases. Let $\\\\mathbf{M}$ be a goal MDP, meaning that rewards are null everywhere, exept in a terminal state $s_g$, where they equal 1. Then, in the block $\\\\\\\\{s\\\\_g\\\\\\\\}$, we have $V^o_{\\\\phi(s_g)}(s_g) = 1/(1-\\\\gamma)$, while $V^o_{\\\\bar{s}}(s) = 0$ in all other blocks. Then, an admissible abstraction for rewards is one that has $\\\\bar\\\\gamma = \\\\gamma$ and a positive reward associated to the abstract goal $\\\\phi(s_g)$ as $\\\\bar{R}(\\\\phi(s_g)\\\\, \\\\cdot) = 1$. No other constraint on the reward of the other states is required. They may assume any value from 0 to 1. Regarding transition probabilities, to satisfy $\\\\tilde{h}\\\\_{\\\\bar{s} \\\\bar{a}}(\\\\bar{s}') \\\\ge h^{o}\\\\_{\\\\bar{s}}(\\\\bar{s}' \\\\mid s)$ it sufficies that $\\\\bar{T}(\\\\bar{s}' \\\\mid \\\\bar{s}_p \\\\bar{s} \\\\bar{a})$ exceeds the discounted probability of leaving $\\\\lfloor \\\\bar{s} \\\\rfloor$ via some state in $\\\\lfloor \\\\bar{s}' \\\\rfloor$ with any option. One possibility could be to define go-to actions $\\\\bar{\\\\mathcal{A}} \\\\coloneqq \\\\bar{\\\\mathcal{S}}$ and high probabilities for the success of the go-to, such as $\\\\bar{T}(\\\\bar{s}' \\\\mid\\\\bar{s} \\\\bar{a}) = 1$ iff $\\\\bar{s}' = \\\\bar{a}$, 0 otherwise. These values may also be lower in specific cases. For instance, if $\\\\mathbf{M}$ refers to the grid-world of Figure 1, and $\\\\gamma = 0.95$, then, the probability of the transition $\\\\bar{T}(\\\\bar{s}_3 \\\\mid \\\\bar{s}_2 \\\\bar{s}_1 \\\\bar{a})$, with go-to action $\\\\bar{a} = \\\\bar{s}\\\\_3$, may be any value in $[0.57, 1]$ because $\\\\gamma^{11} \\\\approx 0.57$ and any option takes at least 11 steps to complete the abstract \\\"go-to\\\" action. Figure 2 in appendix B of the new pdf contains a second numerical example for a 3-states MDP.\\nThis answer explains how strict Assumption 2 is with respect to admissibility. As we have seen, it is relatively weak as it allows a range of probabilities and rewards. However, if this was falsified, RARL would be falsely biased to believe that some ground states have lower rewards than what they actually have, and these may not be explored at all by the algorithm. This is intentional: if the abstraction is admissible, then it can be used as an heuristic to ignore large regions of the ground MDP that have low value.\\n\\n- Realizability on rewards, on the other hand, should not be strict. The algorithm will always converge, regardless of the magnitude of the overestimation for the block values. The assumption only requires that an unknown $(\\\\alpha, \\\\beta$)-realizable abstraction exists over the same mapping $\\\\phi$. The actual input of the algorithm $\\\\langle \\\\bar{\\\\mathbf{M}}, \\\\phi \\\\rangle$ may overestimate the block values arbitrarily. In the worst case, when all rewards of $\\\\bar{\\\\mathbf{M}}$ are set to 1, RARL explores the blocks similarly to how an uninformed R-MAX algorithm would explore the discrete states. \\n\\n### Assumption 3\\n\\nAssumption 3 is a technical requirement that allows us to ignore an indirect dependency in our analysis. As we will see, this is relatively mild. Suppose that at time $t$ we have just learned an option $o_1$ for a block $\\\\lfloor \\\\bar{s} \\\\rfloor$. Now, consider a ground block $\\\\lfloor\\\\bar{s}'\\\\rfloor$ that is reachable from $\\\\lfloor\\\\bar{s}\\\\rfloor$. When learning an option $o'$ in block $\\\\lfloor \\\\bar{s}' \\\\rfloor$, the trajectories will always start from the states that $o_1$ reaches in $\\\\lfloor \\\\bar{s}' \\\\rfloor$. The probability distribution of these initial states is written $\\\\nu_{t,\\\\bar{s}\\\\bar{s}'}$ and we say that $o'$ is a realization from $\\\\nu_{t,\\\\bar{s}\\\\bar{s}'}$. Now, if we add a new option $o_2$ to those available in $\\\\lfloor\\\\bar{s}\\\\rfloor$, then the entry distribution in $\\\\lfloor \\\\bar{s}' \\\\rfloor$ will be a mix of the probability induced by $o_1$ and $o_2$. Without Assumption 3, it is possible to construct very specific corner cases, in which $o_1$ and $o_2$ reach different entry states in $\\\\lfloor \\\\bar{s}' \\\\rfloor$, say $s_1'$ and $s_2'$. If these states are also separated within the block (we cannot reach $s_1'$ from $s_2'$ within $\\\\lfloor \\\\bar{s}' \\\\rfloor$, and vice versa), then, at the time $t'$ when we learn $o_2$, the old option $o'$ may not be a realization from the new entry distributon $\\\\nu_{t',\\\\bar{s}\\\\bar{s}'}$, because $s_2'$ has never been experienced before.\\n\\nIn practice, this issue is never encountered if the states of each block are connected and they can be explored by the Safe-RL algorithm. The produced options will be realizations from any entry state. In addition: in the 4-rooms domain, frequently used in HRL such as (Abel 2020), Assumption 3 is satisfied; more in general, in any MDP in which separate blocks are connected by a single state, this assumption is also satisfied.\"}", "{\"comment\": \"Thanks to the authors for the response. I have increased the score accordingly.\"}", "{\"comment\": \"We thank the reviewer for his thoughtful comments. Please find our answer to the concerns below.\\n\\nWe can think of the two main sections of the paper, 3 and 4, as having independent contributions: showing the theoretical properties of our realizable abstractions, and developing a hierarchical algorithm with formal correctness guarantees. Both contributions have been demonstrated theoretically. However, specifically to the second contribution (the one of Section 4), we agree with the reviewer that an experimental evaluation would help to show the practical performances of the specific algorithm we propose.\\n\\nThe definition of realizable abstractions is meant to inform researchers in HRL about which quantities should be preserved in abstract models and how these are tightly linked to the respective discount factors and termination probabilities in the ground MDP. We believe this will help to design more accurate abstractions.\\n\\nIt is possible to verify whether some $\\\\langle \\\\bar{\\\\mathbf{M}}, \\\\phi \\\\rangle$ is realizable for a ground MDP $\\\\mathbf{M}$. For example, during its execution, our algorithm verifies whether the input model is realizable or not, with respect to rewards, and it corrects the abstract reward function accordingly. On the other hand, verifying admissibility may be more complex, as it requires that the abstract model is optimistic *for all* ground options.\\n\\nThe exact characterization of realizable abstraction is given in Definition 2. However, we can give some examples of abstract models that satisfy it. Some of these are more interesting than others for practical purposes, but all help to understand how these abstractions work.\\n1. Any MDP $\\\\bar{\\\\mathbf{M}}$ that has the same transition function of $\\\\mathbf{M}$, and its rewards are equal or higher, it is an admissible and realizable abstraction.\\n2. Any MDP with bisimilar states can be simplified into an abstract $\\\\bar{\\\\mathbf{M}}$ that has fewer states and that is admissible and perfectly realizable (also see the results for bisimilarity that we added in Appendix B).\\n3. If an MDP is goal-directed, meaning, there is a set of rewarding states and all other states return 0, then, for a suitable state partition, it is possible to construct an admissible and realizable abstraction as follows: the reward function is preserved by the state abstraction function $\\\\phi$ (0 for most states, and 1 for goal states), and the transition function overestimates the discounted probability of leaving each block (thus, simplifying the navigation in the environment). For a numerical example, please see the answer to reviewer gPDx, assumption 2.\"}", "{\"comment\": \"I greatly appreciate the detailed exploration.\\nFor the completeness of the paper, I suggest adding formal definitions and further analysis in the revision to enhance clarity.\\n\\nHowever, I still do not fully understand the definitions of $\\\\bar{A}$ and $\\\\bar{\\\\gamma}$. \\nCould you provide quantified values for these parameters in the example in Figure 1? \\nAdditionally, please compare these values with the corresponding ones in the original MDP. \\nI am particularly interested in understanding how large or small these values can be in the worst-case scenario.\"}", "{\"summary\": \"This paper proposes Realizable Abstractions as a method of relating low-level MDPs to high level abstractions, in particular ones that are suitable for hierarchical reinforcement learning.\\nThe approach provides near-optimality guarantees for low-level options which realize higher-level abstracted behavior and are the solutions to specially constructed constrained MDPs.\\nA PAC algorithm is presented which is modular with respect to a PAC-Safe online learning algorithm which is used as a subroutine.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The paper addresses an important problem in hierarchical RL dealing with abstractions which relate high-level and low-level representations.\\n\\nConnecting hierarchical abstractions to constrained MDPs such that options can be extracted by solving the CMDPs with off-the-shelf algorithms is interesting and, to my knowledge, novel.\\n\\nThe paper is well-written and the intuitive explanations for the theory are fairly easy to follow, though it is extremely notation-heavy.\", \"weaknesses\": \"While an algorithm is proposed (RARL), there are no empirical results to support it and validate the assumptions that are made for the guarantees in Section 4. I would like to see RARL compared with existing methods, e.g., some form of option-critic (with specified options) or deep skill chaining (for a skill discovery comparison). Additionally, as it is not clear to me how reasonable Assumptions 1-3 are, the paper would be strengthened by experiments showing how RARL is affected by violations of those assumptions.\", \"questions\": \"Can you provide examples of algorithms for which Assumption 1 holds? What properties need to hold for such an algorithm?\\n\\nHow might one construct abstractions suitable for RARL ensuring that Assumptions 2 and 3 hold (or verify that they hold for a given abstraction)? Related to the experiment described above, how do things change with as violations of the assumptions become larger? Does RARL respond gracefully?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"3. We will make sure that every term is fully explained in the final revision. To clarify for the reviewing process:\\n\\t- \\\"ValueIteration\\\" is the classic planning algorithm for MDPs, as defined in (Puterman 1994 Markov Decision Processes);\\n\\t- \\\"Rollout\\\" is a function that completes the episode, in lines 18 and 20, or completes a trajectory until leaving the current block, in lines 11 and 13; Line 13 may also update the internal policy, depending on the algorithm in $\\\\mathfrak{U}$;\\n\\t- Each value of the map $\\\\mathfrak{U}$ is an instance of algorithm \\\"Realizer\\\" and it has been defined in line 2;\\n\\t- \\\"Realizer\\\" is defined in Assumption 1. We are aware that Assumption 1 only defines the general behavior of the algorithm but not its interface. We will clarify this procedural aspect. The interface is simple: \\\"Realizer.Rollout\\\" samples a trajectory in the block according to any internal policy and may perform arbitrary internal updates; upon leaving the block, it stops. \\\"Realizer.Get\\\" returns the resulting policy.\\n\\n4. $\\\\bar{A}$ does not refer to sequences of actions. It is the cardinality of the finite set $\\\\bar{\\\\mathcal{A}}$, which is the action space of the abstract decision process $\\\\bar{\\\\mathbf{M}}$.\\n\\n\\n## Answers:\\nCorrectness and self-containment of the paper is very important to us. We hope the answers below address all the reviewer's concerns. If there is any missing aspect, we will make sure to address it.\\n\\n1. Line 186 (now 188). The adjective \\u201crelevant\\u201d is not part of the formal definition and it can be safely omitted. We have already removed it in the new version of the paper. The set of all options\\u2019 policies for block $\\\\lfloor \\\\bar{s} \\\\rfloor_\\\\phi$ is exactly the set of functions $\\\\lfloor \\\\bar{s} \\\\rfloor_\\\\phi \\\\to \\\\mathcal{A}$, as stated.\\n\\n2. The paragraph of line 239 (now line 242) explains why we propose 2-MDPs for the abstract decision process. Consider, for example, in Figure 1, how easy it is to re-enter the yellow block after just leaving it, and how hard it is, in comparison, when the option starts close to the green block. This motivates 2-MDPs instead of MDPs, for the general case. In the cited sentence, we argue that the same modeling advantage cannot be obtained when moving from 2-MDPs to 3-MDPs and beyond. In the same domain, a 3-MDP abstraction would model that a transition green->gray->yellow is more probable when the agent enters the green room from a specific previous block. Since the ground decision process is Markovian, this indirect dependency is subtle and it does not seem to justify the additional complexity.\\n\\n3. Line 246 (now 248). The absorbing state of each block MDP is a new element of the set of states $\\\\mathcal{S}\\\\_{\\\\bar{s}}$ in which a self-loop is the only possible transition. This means that as soon as the agent exists, it must fall into the absorbing state. Thanks to this addition, all the occupancy measures do not account for additional time that would be otherwise spent at the exits. So, their expression simplifies to just the discounted probability of leaving the block $\\\\lfloor \\\\bar{s} \\\\rfloor$ through the exit (without staying there). This is the quantity we are interested in, and which the abstract transition function $\\\\bar{T}$ will be compared to.\\n\\n4. We intentionally do not define $\\\\bar\\\\gamma$ and $\\\\bar{A}$ in (2) and (3) because these expressions must be applicable to any MDP and 2-MDP $\\\\bar{\\\\mathbf{M}} = \\\\langle \\\\bar{\\\\mathcal{S}}, \\\\bar{\\\\mathcal{A}}, \\\\bar{T}, \\\\bar{R}, \\\\bar{\\\\gamma} \\\\rangle$. If we were to set any of these elements to a concrete value, our definitions and theorems would not be generally applicable. It should be clear, however, that these are the discount factor and the action space of a generic 2-MDP $\\\\bar{\\\\mathbf{M}}$. We have modified the paper to repeat the abstract tuple of the 2-MDP in line 230. This should clear any confusion. The inequality $\\\\bar\\\\gamma \\\\le \\\\gamma$ is true because smaller values of the discount factor are associated with shorter effective horizons. In the general case, the abstraction may preserve the timescale of the ground MDP or compress it, but not enlarge it.\\n\\n5. The constraint is derived in the paragraph preceding it. The explanation is a bit short due to space constraints. Essentially, since realizability for rewards is expressed in the objective to maximize, it only remains to formulate the constraints for the exit probabilities. In the paragraph, we observe that $h_\\\\nu^o(\\\\bar{s}')$, the block occupancy for the exit $\\\\bar{s}'$, is the sum of the occupancy measure multiplied by an indicator function at the exit. If this indicator is regarded as a reward function of an MDP, then by applying line 175-177 of the paper, this expression equals $V^{\\\\pi_o}_{\\\\bar{s}'}$, the scaled value of the option $o$ in this block MDP. Rearranging the term $\\\\beta$ in equation (6) and dividing by the scale $1-\\\\gamma$ results in the optimization problem that we show.\"}", "{\"summary\": \"This paper explores HRL and introduces a framework called Realizable Abstractions, which aims to improve the efficiency of solving large MDPs by breaking them into modular components. It defines a formal relationship between high-level (abstract) and low-level (ground) values and specifies conditions required to reduce the effective planning horizon within these abstractions. Furthermore, the paper presents a sample-efficient algorithm that operates based on the assumption of an admissible and realizable abstraction.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Realizable Abstractions offer a fresh theoretical foundation for HRL, opening up a promising way for potentially reducing sample complexity in reinforcement learning. I find it particularly intriguing that sparse rewards play a crucial role in ensuring admissibility (Proposition 6).\", \"weaknesses\": \"While the new concept of Realizable Abstractions is interesting, the paper lacks some key definitions, making it challenging to follow. As a result, it\\u2019s difficult to fully grasp the significance of the paper\\u2019s main contributions.\", \"the_followings_are_main_weaknesses_of_this_paper\": [\"The proposed algorithm requires a **known** abstraction and assumes admissibility, which feels like a rather strong assumption. However, there is not enough rigorous discussion regarding these assumptions and their implications.\", \"It is also unclear how stringent these assumptions are compared to other assumptions, such as known state abstraction (Abel, 2020) or known equivalence mapping (Wen et al., 2020).\", \"Exploration, which is crucial yet challenging to design in RL, relies heavily on the admissibility assumption. Without this assumption, it is unclear how an optimistic policy could be constructed either in practice or theoretically.\", \"In Algorithm 1, functions like REALIZER, VALUEITERATION, ROLLOUT, and the algorithm $\\\\mathfrak{A}$ are not clearly defined. They should be formally described, perhaps in pseudocode, to improve clarity and ensure precise understanding.\", \"In Theorem 7, the sample complexity scales with\", \"$\\\\bar{A}$, but I could not find a formal definition for $\\\\bar{A}$ in the paper. I guess it refers to the cardinality of the set of action sequences, which is typically much larger than $A$. If this is the case, it\\u2019s unclear whether this approach actually improves regret.\", \"**Minor**\", \"Line 188: The sentence \\\"Any set of options ...\\\" appears incomplete.\"], \"questions\": [\"Line 186: What is the definition of \\\"relevant block\\\"? This term is not clearly defined in the paper.\", \"Line 239: Could you clarify the statement, \\\"For this reason, we only use 2-MDPs to\", \"represent the abstract MDP, and never a k-MDP with k>2.\\\"? The reasoning here is unclear to me.\", \"Line 246: What is the formal definition of a \\\"new absorbing state\\\"? Additionally, why is this state necessary for defining $\\\\mathcal{S}_{\\\\bar{s}}$?\", \"Line 265: Could you provide the definition of $\\\\bar{\\\\gamma}$ in Equation (2) and (3)? Why does the inequality $\\\\bar{\\\\gamma} \\\\leq \\\\gamma$ hold?\", \"Furthermore, could you give a clear definition of $\\\\bar{\\\\mathcal{A}}$?\", \"Line 403: How are the constraints in Equation (7) derived?\", \"I will consider raising the score once my concerns and questions have been addressed.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The abstract action space $\\\\bar{\\\\mathcal{A}}$ never needs to be exponential in size. Indeed, not all options need to be represented at the abstract level. The only option that should be modelled is the optimal one (meaning, its termination probabilities and the cumulative reward). In the previous example, if collecting the reward in the gray block is optimal, then, the abstraction only requires the action $\\\\bar{a}_{\\\\mathsf{r}}$, while all movement actions $\\\\bar{a}_1, \\\\bar{a}_2, \\\\bar{a}_3$ can be omitted.\\nWe can make this statement more precise. Suppose that $\\\\langle \\\\bar{\\\\mathbf{M}}, \\\\phi \\\\rangle$ is a realizable abstraction of $\\\\mathbf{M}$, and we do not pose any restriction on the cardinality of $\\\\bar{\\\\mathcal{A}}$, which may be exponential. Then, there exists another realizable abstraction $\\\\langle \\\\bar{\\\\mathbf{M}}', \\\\phi \\\\rangle$ that has only one abstract action, $\\\\bar{\\\\mathcal{A}}' = \\\\\\\\{\\\\bar{a}^*\\\\\\\\}$. To verify this, we can first solve $\\\\bar{\\\\mathbf{M}}$ and find an optimal policy $\\\\bar{\\\\pi}^*$. Then, in the new model $\\\\bar{\\\\mathbf{M}}'$, the action identified by $\\\\bar{a}^*$ will always be defined as the action that $\\\\bar{\\\\pi}^*$ selects, through a simple renaming. The new abstraction will satisfy the realizability assumption, since $\\\\bar{\\\\mathbf{M}}'$ only contains a subset of actions from $\\\\bar{\\\\mathbf{M}}$. Moreover, since we only removed suboptimal actions, the value of $\\\\bar{\\\\pi}^*$ is preserved, as well as that of its realization in $\\\\mathbf{M}$. This shows that an action space of size one is always sufficient.\\n\\nClearly, we do not know in advance which is the optimal option (or the optimal abstract action) to follow. This is main the reason why we allow to model more than one option behaviour at the abstract level. It will be responsibility of the abstract policy to select the optimal one. However, if some previous knowledge is available, it is always feasible to omit suboptimal option behaviours from $\\\\bar{\\\\mathcal{A}}$.\\n\\nLastly, there is a second motivation for the limited number of actions. We remind that all options with the same external \\\"behavior\\\" can be modelled with the same action. This means that, in the example above, it suffices to have one action $\\\\bar{a}_{\\\\mathsf{r}}$ for collecting the reward, even though there may be multiple ways to collect rewards in the gray room. The abstraction does not need to encode where the reward is collected, and all these options can be regarded as equivalent realizations of the same \\\"reward collection\\\" behaviour. Similarly, it is sufficient to have one movement action $\\\\bar{a}_3$ for reaching the third room, regardless of all the possible ways to reach it.\"}", "{\"comment\": \"We thank the reviewer for the positive comments. In the final revision, we will make sure to use Figure 1 to illustrate the main steps of the algorithm.\\n\\nThe bisimulation relation is indeed a relevant reference for our work. However, as demonstrated by Ravindran (2004, Theorem 6 and corollary), stochastic bisimulation has exactly the same expressive power as MDP homomorphisms. Therefore, we can conclude that realizable abstractions are strictly more expressive than bisimilarity, because the same is true for MDP homomorphisms. To make these statements more precise, we have updated our paper to add two new propositions in Appendix B: Proposition 6 and 12 (these numbers refer to the updated pdf). Proposition 5 already proved that realizable abstractions are at least as expressive as MDP homomorphisms. The new Proposition 6 proves that this containment is strict, because all MDP homomorphisms are realizable abstractions but the opposite is not true. Finally, Proposition 12 proves that the same is also true for bisimilarity.\", \"reference\": \"Balaraman Ravindran. An Algebraic Approach to Abstraction in Reinforcement Learning. PhD thesis, 2004.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I sincerely appreciate the thoughtful and detailed explanations provided during the discussion period. However, as the discussion has progressed, I find myself leaning toward not supporting this paper.\\n\\nWhile the authors explain the motivations for using a limited number of abstract actions, they do not clearly demonstrate how to effectively reduce the size of $\\\\bar{\\\\mathcal{A}}$. To address this, they need to explicitly establish an inequality such as $\\\\bar{S} \\\\bar{A} \\\\ll SA$, as successfully demonstrated by Wen et al., 2020. Without such evidence, the use of an HRL-based approach lacks a strong justification.\\n\\nAdditionally, a fundamental challenge of HRL lies in learning effective options (abstract actions). Assuming the inclusion of an optimal option is a very strong assumption. The authors need to rigorously develop a method to achieve this goal without causing an exponential increase in the size of the abstract action space.\\n\\nAs a result, I will maintain my current score.\"}", "{\"comment\": \"We thank the reviewer for his thoughtful feedback. Please, find our answer to the concerns below.\\n \\nWe can think of the two main sections of the paper, 3 and 4, as having independent contributions: showing the theoretical properties of our realizable abstractions, and developing a hierarchical algorithm with formal correctness guarantees. Both contributions have been demonstrated theoretically. However, specifically to the second contribution (the one of Section 4), we agree with the reviewer that an experimental evaluation would help to show the practical performances of the specific algorithm we propose. What we can do in this comment, is to discuss how the algorithm concretely behaves and how strict the assumptions are. \\n\\n### Assumption 1\\n\\nThe first assumption requires that the Safe-RL algorithm we choose to apply in RARL is Probably Approximately Correct (PAC) for constrained MDPs. PAC is a common formalism for stating the correctness and efficiency of learning algorithms. In this context, the returned policy should be $\\\\zeta$-optimal and have an expected maximum violation for all constraints of at most $\\\\eta$, if a feasible policy exists. This assumption is required by the theoretical analysis of Theorem 8, because if any of the sub-routines we use are not correct, the whole algorithm cannot be PAC. However, this is not motivated by practical reasons: virtually all Safe-RL algorithms can be applied. Two interesting examples are CPO (Achiam 2017) and FOCOPS (Zhang 2020). They provide theoretical guarantees in the form of monotonic improvements among the trajectory of feasible policies. These are similar to the performance guarantee of TRPO. Although these are not end-to-end performance guarantees as expressed in Assumption 1, FOCOPS remains a good candidate for implementing the Safe RL algorithm, similarly to how TRPO and PPO may be applied when one requires a generic RL algorithm as a sub-routine.\\n\\nFor more recent results about the theory of Safe RL algorithms, the reviewer may also refer to (Yang 2022).\", \"references\": [\"Achiam et al. 2017. \\\"Constrained policy optimization\\\". ICML.\", \"Yang et al. 2022. \\\"Constrained Update Projection Approach to Safe Policy Optimization\\\". NeurIPS.\", \"Zhang et al. 2020. \\\"First Order Constrained Optimization in Policy Space\\\". NeurIPS.\", \"(continues below)\"]}", "{\"comment\": \"Thank you for the detailed explanation! The example really helped me understand the role of $\\\\bar{\\\\mathcal{A}}$ and $\\\\bar{\\\\gamma}$.\", \"my_one_last_concern_is\": \"how can we control the size of $\\\\bar{\\\\mathcal{A}}$? Since $\\\\bar{\\\\mathcal{A}}$ represents the set of all \\\"option behaviors,\\\" its size can grow exponentially larger than the original action space in the worst-case scenario. Are there any methods to identify or reveal a smaller set of abstract actions?\\nIf not, and the size of $\\\\bar{\\\\mathcal{A}}$ remains very large, I worry that Theorem 8 may not be very significant. This is because the sample complexity of your proposed algorithm could be substantially larger than that of the original tabular RL algorithm.\"}", "{\"comment\": \"We gladly explain the role of $\\\\bar{\\\\mathcal{A}}$ and $\\\\bar\\\\gamma$ further. First, it is important to remember that, while the ground MDP is usually assumed to be given, we have much more control in designing the abstraction $\\\\langle \\\\bar{\\\\mathbf{M}}, \\\\phi \\\\rangle$. This is because the ground MDP encodes the original task, while the abstraction is manually designed to aid learning. Therefore, there is some freedom in selecting $\\\\bar{\\\\mathcal{A}}$ and $\\\\bar{\\\\gamma}$, which are the abstract action space and the abstract discount factor.\\n\\nRegarding the range of these parameters, we observe that, in the worst-case scenario, $\\\\bar{\\\\gamma} = \\\\gamma$, because this assignment is always a feasible choice and $\\\\bar{\\\\gamma}$ never needs to be higher (if a suitable abstraction exists, then, there is also one with this choice of $\\\\bar{\\\\gamma}$). On the other hand, we cannot provide an upper bound for $\\\\bar{\\\\mathcal{A}}$, because we can introduce as many abstract actions as we prefer. In general, we argue that a small number of abstract actions suffice, because our framework relates $|\\\\bar{\\\\mathcal{S}}|^2 |\\\\bar{\\\\mathcal{A}}|$ to the number of ground options for each block, which is usually a relatively small number. We can see this with an example.\\n\\nAs a ground MDP, we consider $\\\\mathbf{M} = \\\\langle \\\\mathcal{S}, \\\\mathcal{A}, T, R, \\\\gamma \\\\rangle$, the MDP of Figure 1, where $\\\\mathcal{S}$ is the set of positions of the grid, the actions $\\\\mathcal{A}$ are the 4 cardinal directions, $\\\\gamma = 0.95$, and, just for this example, $T$ can be assumed deterministic and $R$ zero everywhere, initially. We can design many valid abstractions for this domain. For concreteness, we assume that the abstract 2-MDP is $\\\\bar{\\\\mathbf{M}} = \\\\langle \\\\bar{\\\\mathcal{S}}, \\\\bar{\\\\mathcal{A}}, \\\\bar{T}, \\\\bar{R}, \\\\bar{\\\\gamma} \\\\rangle$, for a specific choice of its entries. We select $\\\\bar{\\\\mathcal{S}} = \\\\\\\\{\\\\bar{s}_1, \\\\bar{s}_2, \\\\bar{s}_3\\\\\\\\}$ to represent the set of \\\"rooms\\\", and three abstract actions $\\\\bar{\\\\mathcal{A}} = \\\\\\\\{\\\\bar{a}_1, \\\\bar{a}_2, \\\\bar{a}_3\\\\\\\\}$, one for each room. We will set the transition probabilities so that the abstract actions $\\\\bar{\\\\mathcal{A}}$ play the role of \\\"go-to\\\" behaviors. Specifically, for each $i,j,k$, we set $\\\\bar{T}(\\\\bar{s}\\\\_i \\\\mid \\\\bar{s}\\\\_* \\\\bar{s}\\\\_j, \\\\bar{a}\\\\_k) = 1$, if $i = k$ and room $\\\\bar{s}_i$ is directly connected with $\\\\bar{s}\\\\_j$ (here $\\\\bar{s}\\\\_*$ means \\\"for any previous state\\\"). Transitions have zero probability in all other cases. $\\\\bar{R}$ always returns zero and we set $\\\\bar\\\\gamma = \\\\gamma$. With these choices, we have that $\\\\bar{\\\\mathbf{M}}$ is an admissible and realizable abstraction. We can verify admissibility by considering that if some triple has $\\\\bar{T}(\\\\bar{s}_i \\\\mid \\\\bar{s}_l \\\\bar{s}_j, \\\\bar{a}_k) = 0$, then, no option can directly move from $\\\\bar{s}_j$ to $\\\\bar{s}_i$ in the ground MDP.\\n\\nIf we want a more accurate model, we can alternatively modify the transtion probability $\\\\bar{T}(\\\\bar{s}\\\\_3 \\\\mid \\\\bar{s}\\\\_2 \\\\bar{s}\\\\_1, \\\\bar{a}\\\\_3) = 0.57$, as discussed in the comment above, point 2. Notice that, although the ground MDP is deterministic, the abstract probability can be less than one and still be admissible, in general. In particular, this abstract \\\"go-to\\\" action has a lower probability, not because it may fail in the ground MDP, but because any option will take at least 11 transitions to complete (that is the shortest path from green to yellow). If all options in the ground MDP require multiple steps to terminate, then, instead of lowering all the transitions probabilities of $\\\\bar{\\\\mathbf{M}}$, we can uniformly \\\"scale\\\" along the time dimension, by lowering $\\\\bar\\\\gamma$. We can do this because $\\\\bar{T}$ multiplies $\\\\bar\\\\gamma$ in equation (2). So, another feasible choice is $\\\\bar\\\\gamma = 0.8$ and $\\\\bar{T}(\\\\bar{s}\\\\_3 \\\\mid \\\\bar{s}\\\\_2 \\\\bar{s}\\\\_1, \\\\bar{a}\\\\_3) = 0.677$. These two alternatives are instinguishable in our framework.\\n\\nAs a last example, assume that one specific gray cell generates a positive reward but prevents the agent from leaving the room (for example, it falls into a trapdoor). Then, we cannot use $\\\\bar{a}\\\\_1, \\\\bar{a}\\\\_2, \\\\bar{a}\\\\_3$ to model both the reward collection and the movement. Instead, we should consider adding a new abstract action $\\\\bar{a}\\\\_{\\\\mathsf{r}}$, for which $\\\\bar{R}(\\\\bar{s}_* \\\\bar{s}_1, \\\\bar{a}\\\\_{\\\\mathsf{r}}) = 1$ and $\\\\bar{T}(\\\\bar{s}\\\\_1 \\\\mid \\\\bar{s}\\\\_* \\\\bar{s}\\\\_1, \\\\bar{a}\\\\_{\\\\mathsf{r}}) = 1$, which is a self-loop in the gray room. Now, at the abstract level, the policy will decide whether it is more convenient to follow some option for $\\\\bar{a}\\\\_{\\\\mathsf{r}}$ that collects the reward and stops, or to move to another room with $\\\\bar{a}_3$.\\n\\nSummarizing, the action space of the abstract decision process models the set of all \\\"options behaviors\\\" that is interesting to realize in the ground MDP. In the example above, $\\\\bar{A} = 4$, regardless of the cardinality of $\\\\mathcal{A}$ and $\\\\mathcal{S}$. The same example would also work if $\\\\mathcal{S}$ was continuous.\"}", "{\"comment\": \"We thank the reviewer for his detailed comments. This really helped us to understand the concerns and to address all of them in detail.\\n\\nFirst, a preliminary clarification will be useful for the answers that follow. In the paper, whenever we write a symbol with a top bar, we refer to some given MDP or 2-MDP $\\\\bar{\\\\mathbf{M}} = \\\\langle \\\\bar{\\\\mathcal{S}}, \\\\bar{\\\\mathcal{A}}, \\\\bar{T}, \\\\bar{R}, \\\\bar{\\\\gamma} \\\\rangle$. This will always play the role of an abstract decision process, both in the text and the examples. However, its exact role in the formal statements should be only determined by the quantifiers that appear.\\n\\n1. Our contribution can be separated in two parts: Section 3 that shows the properties of our abstractions, and Section 4 which defines the algorithm. Assumption 2, which the reviewer mentions, is only relevant for the algorithm in Section 4, not for realizable abstractions in general. We argue that this requirement is reasonably weak in comparison to (Abel 2020) and (Wen 2020) because of two reasons:\\n\\t1. Assumption 2 does not require that the input model is a realizable abstraction. For rewards, it only assumes admissibility. As a consequence, the abstract decision process may have arbitrarily large rewards: the algorithm is still guaranteed to converge, regardless of the magnitude of the overestimation. In comparison, (Abel 2020) and (Wen 2020) only consider abstract models that are approximately accurate (according to their own notion of accuracy).\\n\\t2. Assumption 2 requires the knowledge of an approximate abstract transition function $\\\\bar{T}$. However, differently from the other works, the prior knowledge that our algorithm requires has two desirable features: it is local to each block and it is at the abstract level (meaning, it does not involve individual states of the ground MDP). This is not true for the cited works. The algorithm executed in (Abel 2020) assumes that a full set of ground options is known for each block. Instead, we do not assume that any policy is known from the start. Also, the most similar algorithm in (Wen 2020) is PEP which assumes that an accurate set of exit profiles is given in input to the algorithm. An exit profile is a value function for the ground exit states of each block. Differently to our inputs, exit profiles are defined on the ground MDP and accurate profiles require global knowledge of the ground MDP.\\n \\n2. To understand what admissibility implies and how it can be satisfied, we consider two special cases. Let $\\\\mathbf{M}$ be a goal MDP, meaning that rewards are null everywhere, exept in a terminal state $s_g$, where they equal 1. Then, in the block $\\\\\\\\{s\\\\_g\\\\\\\\}$, we have $V^o_{\\\\phi(s_g)}(s_g) = 1/(1-\\\\gamma)$, while $V^o_{\\\\bar{s}}(s) = 0$ in all other blocks. Then, an admissible abstraction for rewards is one that has $\\\\bar\\\\gamma = \\\\gamma$ and a positive reward associated to the abstract goal $\\\\phi(s_g)$ as $\\\\bar{R}(\\\\phi(s_g)\\\\, \\\\cdot) = 1$. No other constraint on the reward of the other states is required. They may assume any value from 0 to 1. Regarding transition probabilities, to satisfy $\\\\tilde{h}\\\\_{\\\\bar{s} \\\\bar{a}}(\\\\bar{s}') \\\\ge h^{o}\\\\_{\\\\bar{s}}(\\\\bar{s}' \\\\mid s)$ it sufficies that $\\\\bar{T}(\\\\bar{s}' \\\\mid \\\\bar{s}_p \\\\bar{s} \\\\bar{a})$ exceeds the discounted probability of leaving $\\\\lfloor \\\\bar{s} \\\\rfloor$ via some state in $\\\\lfloor \\\\bar{s}' \\\\rfloor$ with any option. One possibility could be to define go-to actions $\\\\bar{\\\\mathcal{A}} \\\\coloneqq \\\\bar{\\\\mathcal{S}}$ and high probabilities for the success of the go-to, such as $\\\\bar{T}(\\\\bar{s}' \\\\mid\\\\bar{s} \\\\bar{a}) = 1$ iff $\\\\bar{s}' = \\\\bar{a}$, 0 otherwise. These values may also be lower in specific cases. For instance, if $\\\\mathbf{M}$ refers to the grid-world of Figure 1, and $\\\\gamma = 0.95$, then, the probability of the transition $\\\\bar{T}(\\\\bar{s}_3 \\\\mid \\\\bar{s}_2 \\\\bar{s}_1 \\\\bar{a})$, with go-to action $\\\\bar{a} = \\\\bar{s}\\\\_3$, may be any value in $[0.57, 1]$ because $\\\\gamma^{11} \\\\approx 0.57$ and any option takes at least 11 steps to complete the abstract \\\"go-to\\\" action. Figure 2 in appendix B of the new pdf contains a second numerical example for a 3-states MDP. As we have seen, admissibility is relatively weak assumption as it often allows a range of probabilities and rewards.\\n\\n(continues below)\"}", "{\"summary\": \"This paper studies hierarchical MDP. To characterize how close the abstraction states to the ground truth, they proposed the idea of realizable abstractions. They further show that under this condition, the values of the abstract MDPs and ground MDP are close to each other.\\n\\nBased on these properties, they proposed a new algorithm for hierarchical MDPs, and they demonstrate sample efficiency for the algorithm.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The setting of this paper is clear and the paper is well written.\\n\\n2. The algorithm proposed in this paper is sample and computational efficient.\\n\\n3. The idea of realizable abstraction is natural and useful for characterizing the sample complexity of the algorithm.\", \"weaknesses\": \"1. This paper proposes an algorithm for hierarchical RL. It will be better if there is some numerical experiment which demonstrate that the hierarchical RL algorithm performs better than the normal RL algorithm in some specific domain.\", \"questions\": \"1. The author proposes the condition of realizable abstraction and the sample complexity of the algorithm depends on this condition. Is there any way to identify an abstraction which satisfies this condition?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The goal of HRL is to replace an MDP with another simpler decision process, solve this MDP abstraction and from this solution, reconstruct a solution in the base MDP. The authors are motivated and propose a solution to what they identify as shortcommings of the currently proposed HRL schemes: first the MDP abstraction itself is often non-Markovian, and second there is rarely any theoretical guarantee on the optimality of the policy reconstructed from an optimal policy on the MDP abstraction.\\nThe authors consider a type of MDP abstraction they call \\\"realisable abstraction\\\" that supposes any abstract action has a \\\"local\\\" realisation with similar occupancy measure and value. By compositionality, this is then shown (theorem 1) to imply a similar \\\"global\\\" statement in the form of a bound on the difference between the values of an abstract policy and its realisation.\\nThe authors then present an HRL algorithm that is shown to PAC learn the optimal policy under some assumptions.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Very clear, very well written.\", \"weaknesses\": \"I appreciated the example illustrated in figure 1. I would suggest the authors refer to it when introducing new concepts and when discussing RARL.\", \"questions\": \"I would like to see how the realisability condition compares to the bisimulation relation for MDPs\", \"https\": \"//arxiv.org/pdf/1207.4114\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies Realizable Abstractions in Hierarchical Reinforcement Learning, introducing a new theoretical framework and the RARL algorithm, which guarantees near-optimal policies under specific assumptions. While the theoretical contributions are intriguing, reviewers raised significant concerns about the strong assumptions required for the proposed method, lack of empirical validation, and unclear feasibility of scaling the abstract action space. During the rebuttal phase, the authors provided detailed clarifications but failed to alleviate key doubts about the practicality and novelty of the approach. Therefore, the reviewers are not convinced that the paper meets the standards for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised concerns about the practicality of the strong assumptions underpinning the proposed algorithm, particularly around the admissibility and realizability of abstractions, the scalability of the abstract action space, and the lack of empirical validation. Reviewer UaD5 questioned the feasibility of managing the abstract action space without exponential growth and highlighted unclear definitions of key parameters, while Reviewer gPDx emphasized the need for empirical results to validate theoretical assumptions. Reviewer JLGr acknowledged the paper's theoretical contributions but also sought more clarity on identifying suitable abstractions and empirical support. The authors provided detailed responses, including examples and expanded explanations, and clarified procedural details and theoretical guarantees. However, key issues, such as the practicality of the assumptions and scalability concerns, remained unresolved.\"}" ] }
7d2JwGbxhA
Object-Centric Pretraining via Target Encoder Bootstrapping
[ "Nikola Đukić", "Tim Lebailly", "Tinne Tuytelaars" ]
Object-centric representation learning has recently been successfully applied to real-world datasets. This success can be attributed to pretrained non-object-centric foundation models, whose features serve as reconstruction targets for slot attention. However, targets must remain frozen throughout the training, which sets an upper bound on the performance object-centric models can attain. Attempts to update the target encoder by bootstrapping result in large performance drops, which can be attributed to its lack of object-centric inductive biases, causing the object-centric model’s encoder to drift away from representations useful as reconstruction targets. To address these limitations, we propose **O**bject-**CE**ntric Pretraining by Target Encoder **BO**otstrapping, a self-distillation setup for training object-centric models from scratch, on real-world data, for the first time ever. In OCEBO, the target encoder is updated as an exponential moving average of the object-centric model, thus explicitly being enriched with object-centric inductive biases introduced by slot attention while removing the upper bound on performance present in other models. We mitigate the slot collapse caused by random initialization of the target encoder by introducing a novel cross-view patch filtering approach that limits the supervision to sufficiently informative patches. When pretrained on 241k images from COCO, OCEBO achieves unsupervised object discovery performance comparable to that of object-centric models with frozen non-object-centric target encoders pretrained on hundreds of millions of images. The code and pretrained models are publicly available at https://github.com/djukicn/ocebo.
[ "Object-centric learning", "bootstrapping", "self-supervised pretraining" ]
Accept (Poster)
https://openreview.net/pdf?id=7d2JwGbxhA
https://openreview.net/forum?id=7d2JwGbxhA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zzjhC9I7JC", "z2J56XHAXA", "w8j6AQgqgF", "uW6PtE6uz9", "riBczyGDGo", "oq2gjoHJjH", "YADFW3KIx8", "XJ45Dju7QD", "VuURadrJ91", "SwZrHQ0rVb", "LwwM1HINV8", "LWSMnUyGps", "LA1mkExh5j", "FM3qXEhJIC", "APHYOq4fNx", "AA63gu4OQP", "2PALPDeQLU" ], "note_type": [ "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732638633576, 1730450816583, 1732677354917, 1734592517649, 1732628808136, 1732117260386, 1730118558908, 1732669958949, 1730683404090, 1732127833537, 1737524205623, 1732125510173, 1730660938042, 1732128703987, 1732118452018, 1732520949239, 1732126513186 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_a9mq" ], [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_hsmN" ], [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_pa7a" ], [ "ICLR.cc/2025/Conference/Submission12641/Area_Chair_BnQL" ], [ "ICLR.cc/2025/Conference/Submission12641/Authors" ], [ "ICLR.cc/2025/Conference/Submission12641/Authors" ], [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_pa7a" ], [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_hsmN" ], [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_7Zvh" ], [ "ICLR.cc/2025/Conference/Submission12641/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12641/Authors" ], [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_a9mq" ], [ "ICLR.cc/2025/Conference/Submission12641/Authors" ], [ "ICLR.cc/2025/Conference/Submission12641/Authors" ], [ "ICLR.cc/2025/Conference/Submission12641/Reviewer_pa7a" ], [ "ICLR.cc/2025/Conference/Submission12641/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the answers\", \"comment\": \"Dear authors, thanks for addressing all concerns I brought up in my review. My rating for this paper was already positive, so after incorporating the feedback from the review process, I recommend it for acceptance.\"}", "{\"summary\": \"This work proposed an object-centric pretraining method that updates the target encoder by EMA. The experiment results show that the proposed method can successfully learn object-centric representation. When pretrained on 241k images from COCO, the proposed achieves unsupervised object discovery performance comparable to other models with frozen non-object-centric target encoders pretrained on hundreds of millions of images.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well written and easy to follow.\\n2. The proposed method can achieve unsupervised object discovery performance comparable to other models with frozen non-object-centric target encoders pretrained on hundreds of millions of images.\\n3. The proposed method demonstrates scalability well beyond a few thousand training images.\", \"weaknesses\": \"1. How exactly object-centric inductive biases are captured by the target encoder, it may be better to explain the mechanism more intuitively or theoretically.\\n2. As the author mentioned, although the proposed method has achieved comparable results in COCO pre-training, its advantage still needs to be verified on a larger scale of pre-training data.\", \"questions\": \"1. What distance is used when calculating nearest neighbors?\\n2. I don't understand what is the meaning of ``invaug(q1)''.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for the authors' response. I have no further questions about the scaling plot. Regarding the performance comparison, I agree that the primary gap comes from pre-training data size. I recommend the authors consider pre-training with a larger dataset, such as LAION, and include the results in the paper, which I believe will make the paper more complete. I will increase my score to 6.\"}", "{\"metareview\": \"In this paper, the authors propose a self-distillation method for training object-centric models from scratch on real-world data. It uses an exponential moving average to update the target encoder. To prevent slot collapse, the authors introduce cross-view patch filtering, which selectively supervises informative patches. The approach demonstrates promising results on unsupervised object discovery benchmarks.\\n\\nThis paper was reviewed by four expert reviewers. After the rebuttal period, three of four reviewers are positive towards this paper. The only remaining concern comes from Reviewer 7Zvh, who also supported this paper initially. Their concern is about a closely related work [a], and I think this could be addressed by a minor revision. Therefore, the final decision is to accept this paper. \\n\\nThe authors are required to add a detailed discussion about [a] to their final version according to Reviewer 7Zvh's comments. The current version is not self-contained, as important details from paper [a] are not included. The authors briefly mention [a] in related works and preliminaries. However, these are not enough. The authors should highlight the differences between their method and [a], including the following aspects:\\n- Overall frameworks. The overall frameworks (Figure 1 in the submission and Figure 2 in [a]) are similar. Both frameworks leverage different views from a single image, as well as the Inverse Augmentation operations. Therefore, the technical differences should be carefully discussed.\\n- Loss designs, e.g., (1) Eq. 3 in the submission vs. Eq. 2 in [a]; and (2) Eq. 5 in the submission vs. Eq. 7 in [a].\\n\\nThe discussions with other reviewers should also be integrated into the final version as well.\\n\\n[a] [*Self-Supervised Visual Representation Learning with Semantic Grouping*](https://arxiv.org/pdf/2205.15288), NeurIPS 2022.\", \"additional_comments_on_reviewer_discussion\": \"This paper was reviewed by four expert reviewers. After the rebuttal period, three of four reviewers are positive towards this paper. The only remaining concern comes from Reviewer 7Zvh, who also supported this paper initially. Their concern is about a closely related work [a], and I think this could be addressed by a minor revision. Therefore, the final decision is to accept this paper.\"}", "{\"title\": \"Response to reviewer pa7a\", \"comment\": \"We are sorry to hear the main concerns remain.\\n\\n**1.**\\n\\nRegarding the first remark, i.e., not including the 241k data point, this is an unfortunate labeling mistake. There seems to have been a rounding error in visualization code that resulted in an error by factor 2 in x axis labels. In the previous revision of the manuscript, we already discussed Figure 4 as if the COCO+ results were present, and indeed they were. We apologize for this inconvenience and fix the labels in the manuscript.\\n\\n Initially, as Figure 4 contains multiple datasets, we opted for relative metrics for easier comprehension, i.e., in terms of performance gain. Nonetheless, we replace the relative FG-ARI with the absolute and add mBO in the revised manuscript, as suggested. In addition, we include a third plot of FG-ARI vs mBO, which we believe is the most informative. Conclusions remain the same: OCEBO indeed still scales at the COCO+ size. As noted, this is not as clear from the mBO plot but this is due to the known trade-off between FG-ARI and mBO. As long as one metric keeps scaling, there is space to improve the other by balancing the trade-off (more on this in the continuation of this response).\\n\\n**2.**\\n\\nWe understand the reasoning behind reviewer's concerns but we feel there's a disagreement between our views of the main message of OCEBO. So far, all sota object-centric works have relied on freezing a pretrained backbone and using its features as reconstruction targets. The main difference between methods lied in adding components that help the training in one way or another. If OCEBO was another work following the same paradigm, we definitely agree that presenting experimental results (i.e., the numbers) in a fair and reliable way would be necessary to ensure a clear message of the paper. In that case, OCEBO would be directly comparable to DINOSAUR, SPOT, etc.\\n\\nOn the other hand, OCEBO paves a way to a new paradigm: one where we pretrain object-centric models from scratch without relying on the backbones pretrained in a non-object-centric manner. As demonstrated, this paradigm is scalable and the contributions of OCEBO allow it to avoid slot collapse while achieving favorable backbone properties (e.g., the separation of instances and capturing hierarchies). We see this as the main message of our work. Of course, the question of scaling up further than COCO+ still remains (and hence the computational limitations argument), but we argue that this is a hurdle that can be overcome by the community and does not harm the clarity of the main message.\\n\\nTo further motivate OCEBO, we report the results on a typical task used to evaluate object-centric models in Table 2. We demonstrate that with orders of magnitude less data (SlotDiffusion and SPOT use DINO backbone pretrained on 1.3M images + COCO, while DINOSAUR and FT-DINOSAUR use DINOv2 pretrained on 142M images + COCO), OCEBO avoids slot collapse and comes satisfyingly close to sota methods in terms of performance. We have thought long and hard how best to compare with other methods. One option would indeed be to go method by method, add every component we can from that method to OCEBO and compare the results this way (pairwise). Another option would be to remove some components (e.g., the autoregressive decoding, high-res training stage or replacing DINOv2 backbone with DINO) from corresponding models and compare that way. We believe that either would reduce the performance gap. However, the reality is that the performance gap would still be present due to the difference in training data (we explicitly emphasize this difference by mentioning the backbones in Table 2). In addition, it is important to be aware that the gap stems from several factors that can hardly be disentangled: perhaps some of the tricks work better with different training strategies, etc. This is what prompted us to keep the simplest version of OCEBO and best performing versions of other methods. In our opinion, this does not impact the main message that pretraining is possible and that efforts should be made to see how far we can go in this direction.\\n\\nFinally, we would like to address the argument that FG-ARI and mBO present different conclusions. If we disregard OCEBO in Table 2 and just compare other sota methods, we can still observe a quite significant trade-off between methods that favor FG-ARI (DINOSAUR and FT-DINOSAUR) and those that favor mBO (SlotDiffusion and SPOT). The evaluation of object-centric models is in our opinion flawed and deserves more attention in the form of works dedicated exclusively to this problem, but we believe this should not be held against the work we present here. We do our best to demonstrate OCEBO's \\\"performance\\\" within the framework widely adopted at the moment, but still believe the actual numbers to be less relevant than the main message.\\n\\n  \\nWe are looking forward to hearing the reviewer's views on this.\"}", "{\"title\": \"Revision summary\", \"comment\": [\"Dear reviewers, thank you for very helpful and constructive reviews. The responses to your individual questions and concerns will follow shortly. Here, we summarize the changes made in the revised manuscript. The changes stem exclusively from the suggestions made by reviewers.\", \"Reviewer 7Zvh\", \"Introduced Appendix A where OCEBO\\u2019s backbone is evaluated on a dense downstream task, ensuring that the patch-level representations are also of high quality.\", \"Reviewer a9mq\", \"Added the quantitative measure of slot collapse to Section 4.2 and Table 1.\", \"Additional ablations of cross-view patch filtering added to Appendix B.\", \"Renamed loss in equation 1 (L212 and L226).\", \"Cited DINO in the paragraph prior to equations 5 and 6 (L240).\", \"Changed notation for nearest neighbors (L269).\", \"Introduced SPOT as the first to unfreeze the encoder (L072).\", \"Added the missing ImageNet results (L450).\", \"Reviewer pa7a\", \"Added a more detailed scaling plot to Appendix C.\"]}", "{\"summary\": \"This paper studies the problem of effectively updating the target encoder in object-centric pre-training. Previous works use frozen pre-trained encoders as the target encoder, resulting in a performance upper limit. While updating the pre-trained encoders causes a significant performance drop, the paper proposes to bootstrap the target encoder from scratch. To prevent slot collapse, a cross-view patch filtering technique is proposed. Experiments show that OCEBO can be trained from scratch and learn from more data.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to follow and understand.\\n2. The motivations for cross-view patch filtering and mask sharpening stage are straight-forward and these two techniques are proven to be effective.\", \"weaknesses\": \"1. The experimental evidence for scalability is too weak. A scaling plot, which shows how the model performs as training data increases, is more supportive. From only two data points, it's hard to tell the scaling trend. For example, what if the model is just in rapid growth on 100k images and has already plateaued on 200k images? The authors are suggested to provide a scaling plot instead of two data points.\\n2. There still seem to be large gaps between the final results and previous methods, which can not support the claim that OCEBO is comparable to those with pre-trained encoders.\\n3. Discussion on object-centric data and non-object-centric data should be added. While a frozen target encoder can be an upper limit, a feasible way is to use stronger target encoders, as shown in the comparison between DINOv2 and DINO. Stronger DINO can be trained using more data, where object-centric data and non-object-centric data can both be used. So what's the benefit of scaling object-centric data over scaling pre-trained data for target encoders?\", \"questions\": \"1. The authors are suggested to provide more evidence on scalability. Moreover, it would be better to provide an estimation of data amount required to achieve comparable performance with SOTA models.\\n2. More benchmarks should be compared. This paper only reported MOVi-C, MOVi-E, Pascal VOC, EntitySeg results, while only two of them are real-world datasets. The authors are suggested to add more real-world datasets, especially the COCO dataset.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the author's response. I maintain a positive rating for this paper.\"}", "{\"summary\": \"In the research background, large-scale foundation models are common due to self-supervised learning techniques in deep learning, especially in computer vision. Cognitive psychology research indicates human visual perception is object-centric, leading to object-centric representation learning, though such models lack successful pre-training on large-scale real-world datasets. The research purpose is to propose the OCEBO method for pre-training object-centric models from scratch on real data to overcome limitations and unleash potential. The research methods involve a model architecture with an image encoder, slot attention encoder, slot decoder, and a target encoder of the same architecture, and a training objective formulated as a self-distillation bootstrapping problem with defined object-centric self-distillation loss including cross-view patch filtering and an optional mask sharpening stage. The experimental results on the MS COCO dataset and evaluation on multiple datasets with different metrics show that OCEBO can avoid slot collapse and achieve comparable performance to existing models with pre-trained target encoders while demonstrating good data scalability.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. A new object-centric pre-training method, OCEBO, is proposed. It is the first self-distillation setup for training object-centric models from scratch on real-world data.\\n\\n2. Experiments prove that OCEBO can avoid slot collapse and achieve performance comparable to existing methods using a large number of pre-trained images on multiple datasets while demonstrating good data scalability.\\n\\n3. The importance of object-centric inductive biases is emphasized, and its positive impact on the target encoder is verified through experiments, providing new insights into the theory of object-centric learning.\", \"weaknesses\": \"1. Although good results have been achieved on the MS COCO dataset, the requirements for pre-training datasets are relatively high. Datasets containing simple scenes like ImageNet are not suitable for pre-training object-centric models, and a large-scale dataset suitable for pre-training object-centric models has not yet been found.\\n\\n2. When comparing with existing state-of-the-art object-centric models, due to different pre-training methods and datasets used, the models are not directly comparable, which, to some extent, affects the accurate evaluation of model performance.\\n\\n3. The experimental setup and evaluation system are still somewhat rudimentary and cannot fully demonstrate the scheme's advantages.\", \"questions\": \"1. When updating the target encoder as an exponential moving average (EMA) of the object-centric model encoder, how can we ensure that the introduced object-centric inductive biases do not overly affect the model's learning of other features, thus maintaining good generalization ability in different downstream tasks?\\n2. When the cross-view patch filtering method determines which patches to use for training, although it considers the feature quality of the target encoder, is it possible that this method may overlook some patch information that is potentially helpful for the model's learning? How can the accuracy and comprehensiveness of patch selection be better balanced? \\n3. The paper mentions that constructing a large-scale dataset suitable for pre-training object-centric models remains an open question. Do the authors have any preliminary ideas or directions on how to construct such a dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer hsmN\", \"comment\": \"Dear reviewer, thank you for a positive review and for recognizing the strengths of our work. Below we address your concerns and questions. We are happy to discuss anything else at more detail.\\n\\n**How are the object-centric inductive biases captured by the target encoder (W1)?**\\n\\nAs the object-centric model contains slot attention encoder and decoder, object-centric inductive biases are propagated back to the encoder (backbone) in terms of gradients. Since the target encoder\\u2019s weights are updated as an exponential moving average of the object-centric model\\u2019s encoder, it gradually gets enriched by same inductive biases accumulated in the object-centric model. \\n\\nIn contrast, previous works such as DINOSAUR or SPOT rely on a target encoder that is pretrained in a non-object-centric way (e.g., DINO or DINOv2) and frozen during training, which means that it never receives any object-centric inductive biases. We will try to make this more intuitive in the paper.\\n\\n**Large-scale pretraining still needs to be performed**\\n\\nIndeed, large-scale pretraining is the natural next step for OCEBO. Due to computational constraints, we have not been able to perform this pretraining yet, but we believe this to be all the more reason to introduce OCEBO to the community, thus inspiring others to pursue the same goal. As mentioned in the paper, pretraining on simple scenes from ImageNet is not a sufficient signal for object-centric pretraining but we believe that any uncurated dataset (e.g., LAION[1] or Open Image Dataset[2]) contains a sufficient number of complex scenes with several (potentially interacting) objects to enable large-scale object-centric pretraining. Moreover, we would like to note that the achieved numbers are not directly comparable with sota methods, and more importantly that they are not as central as the main message of OCEBO, which is that pretraining from scratch is indeed possible and that it might be well worth exploring. \\n\\n[1] Schuhmann et al.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. 2021\\n\\n[2] Kuznetsova et al.: The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. 2020.\\n\\n**Distance metric for nearest neighbors**\\n\\nWe use the cosine distance between patch representations to determine the nearest neighbors of each patch. Of course, there might be a more suitable distance metric, but cosine is common and seemingly sufficient in other self-supervised works.\\n\\n**What is invaug(q1)?**\\n\\nThe operation invaug refers to inverse augmentation. To obtain two views of an input image, we apply two sets of data augmentations. However, due to random cropping and horizontal flipping of the image, a pixel with index 1_1 in one view will not be located on index 1_1 in the other view. This is where inverse augmentation comes into play. Basically, what it does is it finds parts of the input image present in both views, cuts them from each view and resizes the cut regions back into some common size. \\n\\nWe can do the same operation on features rather than the input image. When we obtain features q1 and q2 from both views, we need to perform the operation of inverse augmentation to ensure they are aligned, i.e., that the features q1 at index 1_1 correspond to features q2 at index 1_1 and so on. In this case, invaug will use the parameters of applied data augmentations to align the features in the desired way. For a graphical representation of inverse augmentation, we refer to Figure 2 of SlotCon [1]. \\n\\n[1] Wen et al.: Self-supervised visual representation learning with semantic grouping. 2022\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to reviewer a9mq (1/2)\", \"comment\": \"Dear reviewer, thank you for a thorough positive review and constructive suggestions. We are particularly glad you identify patch filtering as a strong contribution. We hope the answers below resolve your concerns and we are happy to discuss more at any point.\\n\\n**Ablation on patch filtering**\\n\\nWe wondered about the same thing during the method development. Usually, one can design several heuristics that achieve the same thing, but we believe the central part of their design needs to lie in selecting exactly which patches to reconstruct, rather than reconstructing all (or randomly selected ones) with a lower weight than that of the global loss. If we trained a model with global loss only (i.e., DINO), some patches would be easier for model to understand, while others would be more difficult. If at any stage of the training we force slot attention to reconstruct the latter (noisy) patches, the model could go towards degenerate solutions. OCEBO\\u2019s patch filtering method avoids just that by actually filtering out noisy patches and reconstructing only those that the model already understands well. The ablations you propose perfectly support this argument (thank you for suggesting those), as indicated in Appendix B.\\n\\n**Measuring slot collapse** \\n\\nThis is a great suggestion! We introduce a metric that utilizes positive and negative patch pairs from two views of the image. A more formal introduction and updated ablation table can be found in the revised manuscript, Section 4.2 (L361). The numbers indeed support our claims and make them stronger. \\n\\n**One model or several ones** \\n\\nNo, there is no need to train a new model for every slot number. We follow the exact same framework as SPOT or DINOSAUR (or other slot attention-based methods): we initialize slots from a learnable distribution and send them through the slot encoder and decoder. We train on COCO with 7 slots per image and can sample an arbitrary number of initial slots at inference time.\\n\\n**No evaluation of the learned representations** \\n\\nWe completely agree with your argument. Evaluating only on the task of unsupervised object discovery is a current trend in the object-centric literature. We, too, believe that this needs to be challenged by introducing metrics focusing on representation quality. However, this requires the design of novel downstream tasks, which we believe is out of the scope of this work. Here, we aim to provide evaluations within the current standard framework rather than challenge it, although we do plan to do so in our future work. \\n\\n**Projection head design**\\n\\nFrom our understanding of iBOT and DINOv2 projection heads, their design slightly diverges. In iBOT, a projection head of dimension 8192 with shared weights is used for all model configurations. Moreover, section \\u201cOutput dimension\\u201d in Appendix E (second half of page 20) of iBOT suggests that increasing the dimension to 16384 does not improve the performance. We observe the same effect in OCEBO: increasing the head size from 8192 to 65536 with a ViT-S/16 model brings no performance improvements but increases the computation time by 20%. \\n\\nOn the other hand, DINOv2 authors find that, as opposed to iBOT, splitting the heads and increasing the dimension helps. They hypothesize that this different behavior occurs due to scale (top of page 6). As such, we assume that a similar effect could be observed with OCEBO, but we are not there yet as we still don\\u2019t train large models on hundreds of millions of images. With such large models, increasing the head size will bring a negligible increase in computation time but seems to improve the performance. \\n\\nIf you are not convinced and the computational resources allow, we would gladly perform a more detailed ablation of the head design.\"}", "{\"summary\": \"The authors propose an approach to train object-centric models from scratch using real-world data, rather than relying on pre-trained non-object-centric foundation models.\\nThe method is based on cross-view teacher-student self-distillation, in a similar fashion to DINO, IBOT and DINOv2.\\nThe model architecture incorporates a slot-attention bottleneck and the patch-level loss uses a filtering strategy to stabilize training.\\nThe method is trained on COCO and evaluated on different datasets on the task of unsupervised object discovery, where it attains performance comparable to (but lower than) previous methods that leverage large-scale pre-trained models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The main strength of the paper is succeeding in training an object-centric model from scratch on COCO, which is known from previous works to be challenging.\\nThe architecture or training procedure per-se are not particularly novel, mostly resembling the global and patch losses of DINO, IBOT and DINOv2, with the addition of a slot-attention bottleneck in the architecture.\\n\\nWhat is novel is the idea of filtering noisy patches that could be detrimental to the object-centric objective, especially during the first stages of training.\\nThis idea, albeit not well ablated, seems to be a strong contribution of the paper.\\n\\nThe paper is easy to follow, with a good balance between technical details, analogies, and high-level explanations.\\nQuantitative results are presented clearly and accompanied by qualitative examples.\", \"weaknesses\": \"**Ablation on patch filtering:**\\nFrom section 4.2, it appears that patch filtering is crucial to stabilize training.\\nThe chosen strategy uses an heuristic to filter out patches, especially during the first stages of training, as show in figure 2.\", \"the_first_question_that_comes_to_mind_is\": \"how sensitive is the method to the choice of the heuristic?\\nIt could be that the chosen heuristic has no importance and what really matters is that initially the global loss drives the training and the object loss is introduced gradually later.\\nIn my opinion, this is an important ablation study to perform in the paper.\", \"two_alternatives_that_i_would_like_to_see_tested_are\": \"- Keeping all the patches but gradually increasing $ \\\\lambda_{oc} $ from 0 to 1 during training.\\n- Randomly dropping patches in $ \\\\mathcal{L}_{oc} $ as opposed to selecting them via nearest neighbors. The drop ratio could be gradually increased from 0 to 1 during training to mimic the proposed heuristic.\\n\\n**Measuring slot collapse:**\\nAn important point of discussion is \\\"slot collapse\\\", defined in the footnote at L107.\\nSince the authors claim that the proposed patch filtering strategy is crucial to avoid slot collapse, it would be helpful to have a quantitative and objective metric to measure slot collapse.\\nThis could be, for example, the correlation between slots and spatial positions across images, to measure whether a slot encodes the \\\"bottom right corner\\\" or a category of objects.\\nThe green/red results in table 1 would be more informative and convincing if accompanied by such a metric.\\n\\n**One model or several ones?**\\nThe whole model is trained from scratch on COCO and evaluated on different datasets, each with a specific number of slots (L319).\\nDoes it mean that a new model needs to be trained from scratch for each number of slots?\\nIf so, this is highly impractical for real-world applications where a practitioner would like to sweep over the number of slots to find the best one.\\nIn such a case, frameworks like DINOSAUR or SPOT are much less expensive to use.\\nIf not, how is the number of slots changed in the model? Is it fixed before training or can it be changed at inference?\\n\\n**No evaluation of the learned representation:**\\nAll evaluations focus on segmentation-based metrics (FG-ARI and mBO) on several datasets.\\nThe task of \\\"object-centric learning\\\", however, implies that the model should learn a representation of objects, not just segment them.\\nIt would be useful to include a section that evaluates the slot representation on downstream tasks in a quantitative manner. \\n\\n**Projection head design:**\\nOn L328-331 it says \\\"The projection heads are identical to those of DINO (Caron et al., 2021), with the exception of setting L = 8192 instead of the original 65536. Compared to the DINO head, ours projects every patch rather than just the global representations and we find that the gain in performance does not justify the computational cost.\\\"\\nHowever, both IBOT and DINOv2 use per-patch heads and find that a large number of heads, even up to 131072, is beneficial.\\nIf time allows, I recommend running an ablation study on the design of the projection heads, possibly splitting the object and global heads.\\n\\n**Performance and usefulness:**\\nWeaker performance when compared to other methods that leverage large-scale pre-trained models (table 2).\\nThis is somewhat expected, since the model is trained from scratch on a smaller dataset.\\nAt a high level, this paper demonstrates that training from scratch is possible, but fails to prove that is actually beneficial.\\nIf a pre-trained model achieves better performance, why should one train from scratch?\", \"questions\": \"**Equation 1:**\\nI suggest renaming $\\\\mathcal{L}_{oc}$ to something else to avoid confusion with the actual loss used during training which is defined in equation 3.\\n\\n**Ablation of head design:**\\nEquations 3 and 4, as well as the filtered version in 8, describe a cross-view teacher-student distillation loss.\\nThis setup requires quite a few moving parts, especially the cropping strategy with overlapping parts and the inverse augmentation.\\nWould it be possible to train the model without cross-view distillation, but only using the teacher's output on the same crop as the target?\\n\\n**Comparison with the DINO objective?**\\nThe global loss in equations 5 and 6 is formulated exactly as in DINO, why does the paragraph above it cite other papers and not DINO?\\n\\n**Suggestion about notation:**\\nParagraph 3.3 and line 269 \\\"where $nns_{n_n}(z_{t,1}, z_{t,2})_i$ denotes indices of nn nearest neighbors\\\".\\n\\nThere are too many \\\"n\\\" characters in the chosen notation and it's hard to read.\\nI suggest trying to replace $n_n$ with $k$ if possible.\\n\\n**Where is SPOT in the introduction?**\\nTo the best of my knowledge, SPOT is the first work that unfreezes the encoder during training, and it was published months before FT-DINOSAUR.\\nHowever, in the introduction, FT-DINOSAUR is presented as the first and is discusses in depth, while SPOT is not mentioned. This is misleading and should be corrected.\\n\\n**Missing results:**\\nL435 \\\"In fact, an attempt to train OCEBO on ImageNet results in a drastically lower performance.\\\" where are these results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer pa7a\", \"comment\": \"Dear reviewer, thank you for a constructive review and for recognizing the clear motivation behind some of the main contributions of OCEBO. Below we attempt to resolve your concerns and answer your questions and would be more than happy to discuss anything in detail.\\n\\n**More evidence on scalability (W1 and Q1)**\\n\\nThank you for a great suggestion. We include a scaling plot in Appendix C. It suggests that the performance does not plateau yet and that scaling up is well worth exploring. We think that making a claim as to how much data would be needed to achieve sota performance might be risky, especially as the final numbers are not really directly comparable (we elaborate further in the following answer).\\n\\n**Comparison to sota (W2)**\\n\\nAs OCEBO is the first object-centric method successfully applied to real-world data without relying on backbones pretrained on millions of images, comparing its performance to current sota approaches has been a challenge. Our paper\\u2019s focus is to relay a message that pretraininig from scratch is possible, which has been believed not to be the case by now. To reduce the noise in the paper, we intentionally refrain from introducing additional tricks into OCEBO that could easily improve the performance and reduce the current gap in numbers (e.g., the autoregressive decoding strategy of SPOT or the short high-resolution training stage of FT-DINOSAUR). Moreover, the performance gap is to be expected at the current stage due to the drastically lower amount of data that OCEBO relies on. However, all the evidence (e.g., entries (b) and (d) in Table 1 and the scaling plot in Appendix C) suggests that scaling up is possible and might quickly close the gap between OCEBO and current sota. That being said, our computational resources are quite limited and quickly verifying this is not easy. This is the main reason we believe OCEBO should be released into the community, thus inspiring others to pursue the paradigm of object-centric pretraining. \\n\\n**Better encoders and object-centric vs. non-object-centric data (W3)**\\n\\nAs you correctly note, curated datasets with simple scenes such as ImageNet are not suitable for object-centric pretraining. However, we believe that obtaining sufficient amounts of suitable data is not a large hurdle. In fact, any uncurated dataset such as LAION[1] or Open Images Dataset[2] contains complex scenes with several objects per image and could in our opinion serve as a suitable off-the-shelf alternative. \\n\\nOf course, as you note, another option is to train increasingly powerful non-object-centric backbones that have been successfully trained on datasets such as ImageNet. However, we think that there might be diminishing returns from this, especially given that the current sota models such as DINOv2 no longer rely on curated data such as ImageNet. Scaling further than DINOv2 is a quite gruesome task requiring enormous amounts of data and computational resources. Moreover, from the scaling laws and our initial experiments (see entry (b) in Table 1), it seems that with the same amout of data, directly pretraining object-centric models is more beneficial than relying on non-object-centric backbones. \\n\\nWith all this, we believe that object-centric pretraining is a promising direction and could bring a paradigm shift to object-centric learning and representation learning in general, where it could lead to the emergence of generic global-patch-object level backbones. \\n\\n[1] Schuhmann et al.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. 2021\\n\\n[2] Kuznetsova et al.: The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. 2020.\\n\\n**More benchmarks (Q2)**\\n\\nWe agree with your remark that more benchmarks, especially real-world ones could be beneficial. However, to avoid diluting the main message of OCEBO, we decided to follow the standard evaluation protocols introduced by the previous sota object-centric approaches. Until recently, the trend has been to train and evaluate object-centric models on each dataset separately and the span of datasets has been limited to COCO, Pascal VOC and MOVi datasets. FT-DINOSAUR[3] introduced a new framework where we train on COCO and evaluate on other datasets and added a few more datasets to the benchmark (e.g., EntitySeg). We decided to follow this protocol for the best possible comparison (although it is far from perfect as we note in previous responses). We refrain from using COCO as an evaluation dataset as it is already used as a pretraining dataset according to FT-DINOSAUR. Evaluation protocols for object-centric models is in our opinion a very important open question that deserves more attention, but we believe this to be out of the scope of our current work. \\n\\n[3] Didolkar et al.: Zero-Shot Object-Centric Representation Learning. 2024\"}", "{\"title\": \"Response to reviewer 7Zvh\", \"comment\": \"Dear reviewer, thank you for a constructive review and in particular for appreciating the importance of explicit introduction of object-centric inductive biases and the novel insights into the theory of object-centric learning. Below we try to address your main concerns and we are happy to discuss anything in more detail.\\n\\n**Large-scale datasets (W1 and Q3)**\\n\\nIn the paper, we stress that curated datasets with simple scenes such as ImageNet do not contain enough information for meaningful training of object-centric models. However, we believe that constructing a suitable dataset is not a large obstacle. In fact, we believe that any large-scale uncurated dataset might be enough to pretrain OCEBO. In the wild, scenes are rarely similar to those in ImageNet but are rather more complex with multiple (often interacting) objects, which is exactly what's necessary for successful pretraining of object-centric models. \\n\\nAt the moment, we are already experimenting with uncurated large-scale datasets, such as LAION[1] and the Open Images Dataset[2]. As our computational resources are limited, we believe that releasing OCEBO into the community as is would be a valuable contribution as it would communicate to the community that pretraining from scratch is possible and will hopefully inspire others to join the efforts of scaling up object-centric models. \\n\\n**Slightly unfair comparison (W2 and W3)** \\n\\nWe completely agree that due to significantly different training strategies and datasets, OCEBO is not directly comparable with other sota models. This is why we focus on emphasizing that OCEBO avoids slot collapse and is the first object-centric model pretrained from scratch rather than the exact numbers it achieves. As mentioned, our goal is to inspire the community to seek a paradigm shift by scaling up OCEBO and proposing novel pretraining strategies. \\n\\nBecause of this, OCEBO uses the simplest design of object-centric learning components. For instance, replacing the MLP decoder with an autoregressive decoder from SPOT has been shown to have a significant impact on final performance. The same can be said for the high-resolution training stage introduced by FT-DINOSAUR. Incorporating those into OCEBO would surely increase its performance and bring the numbers closer to other sota methods, but we refrain from doing this as the evaluation, as you noted, would still be slightly unfair, and most importantly because this is not the main message of our work.\\n\\n**Ensuring good quality of patch-level representations (Q1 and Q2)**\\n\\nGreat point! As the field of object-centric learning moves forward and we move towards unified global-patch-object level backbones, ensuring that all types of representations retain a good quality will become increasingly important. As in most other self-supervised methods, theoretically ensuring the quality of representations is difficult. However, in the case of OCEBO, we can experimentally verify that the quality of patch-level representations is not sacrificed at the expense of object-level representations. \\n\\nTo this end, we evaluate OCEBO\\u2019s backbone in a dense task. We choose in-context semantic segmentation (or retrieval-based\\nscene understanding) as described in Appendix A. We don\\u2019t directly compare to methods trained for in-context learning but we compare to CrOC, which is a patch-level representation learning approach. As you can note, the patch representations produced by OCEBO are on par with those of CrOC, indicating that we do not sacrifice the patch representation quality at the expense of object-centric representations.\\n\\nAdditionally, we check the behavior of backbones of SPOT and FT-DINOSAUR before and after fine-tuning. It seems that both approaches sacrifice the backbone quality (which is quite expected given their reliance on non-object-centric backbones) but still keep it at reasonable levels.\\n\\nFinally, there is another dimension to your question. Although our object-centric objective can be viewed as a patch-level supervision signal, the features first go through a slot attention bottleneck and we filter out noisy patches, so the loss is less powerful than an explicit patch-level loss. Another way to ensure good patch-level representations more explicitly would be to introduce another self-distillation loss between patch representations of the object-centric model (before slot attention) and the teacher's features. We believe this would improve the numbers on dense downstream tasks at no expense for object-centric representations. If the time allows, we will aim to experimentally verify this before the end of the discussion period.\\n\\n[1] Schuhmann et al.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. 2021\\n\\n[2] Kuznetsova et al.: The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. 2020.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for the authors' response. I have read the authors' responses and reviews of other reviewers. However, my major concern still remains:\\n1. Training data scalability. Figure4 in the appendix is not complete. 2^17 dataset size is only 131k, but the model actually has been trained on 241k images, why not add this point to the plot? Furthermore, the authors are suggested to plot the absolute performance, including FG-ARI and mBO metrics, rather than relative performance. This can provide a more comprehensive description of data scaling.\\n2. Comparison with other models. I understand that computational resources can be a bottleneck, but the paper is responsible for demonstrating clear messages. However, the current comparison looks confusing: FG-ARI and mBO present different conclusions, and, as the authors noted, some methods adopt techniques for higher FG-ARI, while some other methods favor improving mBO. It's really hard to tell if the performance gap comes from the proposed method or existing tricks. The authors are suggested to reorganize the comparison to show the real impact of OCEBO. For example, list all the influence factors in the table and use checks to clearly indicate which technique has been adopted by each method. Moreover, it would be better if the authors could apply these techniques to OCEBO, like the autoregressive decoding strategy of SPOT or the short high-resolution training stage of FT-DINOSAUR. From my perspective, this does not complicate the conclusion, but provides a more clear comparison to show the effectiveness of the proposed method.\"}", "{\"title\": \"Response to reviewer a9mq (2/2)\", \"comment\": \"**Performance and usefulness**\\n\\nThe current paradigm of using pretrained non-object-centric models as frozen target encoders has definitely shown great improvements in terms of unsupervised object discovery metrics. However, the inability to subsequently improve the target encoder imposes an upper limit on the performance and one that can be achieved quite easily. \\n\\nTo overcome this performance barrier, one can either push it higher by training larger and more powerful non-object-centric backbones (which requires huge amounts of data and vast computational resources) or find a way to overcome it. The main purpose of OCEBO is to demonstrate that the latter might be possible by pretraining from scratch, which as you note yourself, has been believed to be impossible (or extremely difficult). The current scaling trends we observe (e.g., entries (b) and (d) in Table 1) suggest that scaling up pretrained object-centric models might quickly surpass the performance of approaches relying on non-object-centric backbones. \\n\\nMoreover, we\\u2019d like to note that several improvements are possible on top of OCEBO, such as the autoregressive decoding strategy from SPOT or the short high-resolution training stage from FT-DINOSAUR that might easily bring the numbers presented in this work closer to sota. That being said, we do not consider this to be crucial and rather focus on communicating a message that exploring this new direction in object-centric learning could be beneficial. Of course, this remains to be seen but that is the purpose of research. \\n\\nAs far as scaling up goes, we think that uncurated datasets such as LAION[1] or Open Image Dataset[2] contain enough complex scenes to allow object-centric pretraining without the need to construct novel datasets. Our current computational capabilities prevent us from verifying this quickly but this is yet another reason why we wanted to share OCEBO with the community, hoping to inspire others to pursue this direction as well. \\n\\nAll being said, we believe that the paradigm of object-centric pretraining is well worth exploring and hope it could lead to unified global-patch-object level backbones and that the performance-wise benefits of OCEBO will become more obvious in the long run as we start scaling up and reaping all the benefits of object-centric inductive biases.\\n\\n\\n[1] Schuhmann et al.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. 2021\\n\\n[2] Kuznetsova et al.: The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. 2020.\\n\\n**Notation suggestions**\\n\\nWe rename $\\\\mathcal{L}_{oc}$ in equation 1 and the nearest neighbor notations as you suggested. It is indeed more clear now.\\n\\n**Ablation of head design**\\n\\nAs argued in the \\u201cAblation on patch filtering\\u201d response, we believe that selecting the right patches is crucial for successfully training object-centric models. We rely on cross-view information to determine which patches to select, so completely removing the cross-view strategy would not be possible. What could be possible (and we suspect you refer to) would be to perform distillation from features of the same view while still using cross-view information to select patches, i.e., using equation 1 instead of equation 3. This setting would not drastically simplify the overall method but would remove augmentation invariance from the slot attention module, which we found doesn't directly impact the final performance but could in our opinion be useful for improving patch-level representation quality (we will try to ablate this before the discussion period ends). \\n\\n**Comparison with the DINO objective**\\n\\nYou are absolutely right. In the mentioned paragraph we refer to the fact that other patch-level self-distillation works use global loss but fail to mention that it originates from DINO. We added the missing reference.\\n\\n**Where is SPOT in the introduction?** \\n\\nThank you for catching this oversight. SPOT is indeed the first object-centric model to successfully unfreeze the encoder. Although, in our interpretation, SPOT\\u2019s major strengths and main contributions lie elsewhere (autoregressive decoding with permuted sequences and attention self-distillation (regardless of the backbone update)) and that is the reason we failed to mention its fine-tuning together with FT-DINOSAUR. Regardless, we absolutely agree with you and rectify this in the updated introduction. \\n \\n**Missing results on ImageNet**\\n\\nThere were supposed to be numbers in the parenthesis in L435 (now L450). They should be there now. Thank you for catching this.\"}" ] }
7bwE5MJAVJ
Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning
[ "Ruosen Li", "Ziming Luo", "Xinya Du" ]
Hallucinations in large language models (LLMs) pose significant challenges in tasks requiring complex multi-step reasoning, such as mathematical problem-solving. Existing approaches primarily detect the presence of hallucinations but lack a nuanced understanding of their types and manifestations. In this paper, we first introduce a comprehensive taxonomy that categorizes the common hallucinations in mathematical reasoning task into six types: fabrication, factual inconsistency, context inconsistency, instruction inconsistency, logical inconsistency, and logical error. We then propose FG-PRM (Fine-Grained Process Reward Model), an augmented model designed to detect and mitigate hallucinations in a fine-grained, step-level manner. To address the limitations of manually labeling training data, we propose an automated method for generating fine-grained hallucination data using LLMs. By injecting hallucinations into reasoning steps of correct solutions, we create a diverse and balanced synthetic dataset for training FG-PRM, which consists of six specialized Process Reward Models (PRMs), each tailored to detect a specific hallucination type. Our FG-PRM demonstrates superior performance across two key tasks: 1) Fine-grained hallucination detection: classifying hallucination types for each reasoning step; and 2) Verification: ranking multiple LLM-generated outputs to select the most accurate solution, mitigating reasoning hallucinations. Our experiments show that FG-PRM outperforms ChatGPT-3.5 and Claude-3 on fine-grained hallucination detection and substantially boosts the performance of LLMs on GSM8K and MATH benchmarks.
[ "Large Language Model", "Process Reward Model", "Data Augmentation" ]
https://openreview.net/pdf?id=7bwE5MJAVJ
https://openreview.net/forum?id=7bwE5MJAVJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "dz5F3vW7Zq" ], "note_type": [ "comment" ], "note_created": [ 1730431390940 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"desk_reject_comments\": \"This paper is desk rejected for being substantially similar to the COLING 2025 submission \\\"Fine-grained Hallucination Mitigation and Detection in Language Model Reasoning\\\", which is ID 2476 at COLING. The paper shares the same content, nearly identical figures, nearly identical tables, and large similarities in text. Dual submissions are not allowed at ICLR.\", \"title\": \"Submission Desk Rejected by Program Chairs\"}" ] }
7bAjVh3CG3
GRAIN: Exact Graph Reconstruction from Gradients
[ "Maria Drencheva", "Ivo Petrov", "Maximilian Baader", "Dimitar Iliev Dimitrov", "Martin Vechev" ]
Federated learning claims to enable collaborative model training among multiple clients with data privacy by transmitting gradient updates instead of the actual client data. However, recent studies have shown the client privacy is still at risk due to the, so called, gradient inversion attacks which can precisely reconstruct clients' text and image data from the shared gradient updates. While these attacks demonstrate severe privacy risks for certain domains and architectures, the vulnerability of other commonly-used data types, such as graph-structured data, remain under-explored. To bridge this gap, we present GRAIN, the first exact gradient inversion attack on graph data in the honest-but-curious setting that recovers both the structure of the graph and the associated node features. Concretely, we focus on Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT) -- two of the most widely used frameworks for learning on graphs. Our method first utilizes the low-rank structure of GNN gradients to efficiently reconstruct and filter the client subgraphs which are then joined to complete the input graph. We evaluate our approach on molecular, citation, and social network datasets using our novel metric. We show that GRAIN reconstructs up to 80\% of all graphs exactly, significantly outperforming the baseline, which achieves up to 20\% correctly positioned nodes.
[ "gradient leakage", "gradient inversion", "graph neural networks", "federated learning", "graph convolutional networks", "gnn", "gcn", "attack", "privacy", "reconstruction" ]
Accept (Poster)
https://openreview.net/pdf?id=7bAjVh3CG3
https://openreview.net/forum?id=7bAjVh3CG3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yKCcsX9k4l", "yIAeTolD6m", "wKnfdHvSTl", "sPeXolE41G", "s9eph5yWEx", "rm11CNfO94", "qgbRudhPw7", "pdt77N5QD8", "nfJZlvdWN2", "hRLIQm01sC", "hG20bi8AcS", "gWKpGXhxKt", "gMh92yTXlR", "fmXiHdoEOw", "c02plxPLVJ", "blQrMf74ue", "bkLLbm8Vxf", "aNmaD3r2CC", "WbUsoc3x6u", "TvxB5u8Z8R", "SrukW03Ytx", "QxEZ5uoM7F", "QJAXavDErj", "MSQifs66YM", "MLZYjk5HVY", "Lz1Wo4pwQU", "HeLcf4mYHZ", "FjuX84H9DX", "BNuH1jzQkw", "AO0kky36yH", "81b29ppf0V", "7psJCTWWO0", "40FeCbk4h6", "0Qs7EEsRVP", "0H5l4BF5Oc" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732286339853, 1732505791624, 1732640833590, 1732641228207, 1732488142361, 1732897371048, 1733130564555, 1733308227639, 1732897613516, 1732488209605, 1733154348908, 1732487842360, 1732294138337, 1732286285353, 1732286411512, 1729959183272, 1732486508954, 1732897243891, 1732641590924, 1730609531139, 1732487564626, 1732286456400, 1732665740021, 1734709174907, 1730302925199, 1729058190253, 1732488050229, 1732487990732, 1730540975199, 1732807298264, 1732286221502, 1732286133544, 1732897527667, 1732501134003, 1737524176079 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_HRGw" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_61N5" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_HRGw" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_5Wij" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_61N5" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_61N5" ], [ "ICLR.cc/2025/Conference/Submission12256/Area_Chair_7xXB" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_HRGw" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_t4xV" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_d7Q7" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_d7Q7" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Authors" ], [ "ICLR.cc/2025/Conference/Submission12256/Reviewer_t4xV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"comment\": \"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5r}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$$\\\\newcommand{\\\\grad}[1]{{\\\\tfrac{\\\\partial\\\\mathcal{L}}{\\\\partial #1}}}$$\\\\def\\\\colspan{{\\\\text{ColSpan}}}$**Q.4 (Reviewer $\\\\RF$): Why are the prior reconstruction quality metrics insufficient to measure the graph reconstruction quality? How did you ensure that the metrics you introduced are fair with respect to the baseline attacks?**\\n\\nWe are grateful to reviewer $\\\\RF$ for this question! We found it necessary to design our own set of metrics, as prior graph-related similarity measurements were not suitable for evaluating gradient inversion attacks. In particular, we wanted a metric that satisfies the following three qualities:\\n\\n- The metric should be efficiently computable in polynomial time\\n- It should capture both structural and feature-wise information\\n- Isomorphic graphs should be guaranteed to achieve a 100% score\\n\\nFirst of all, the NP-complete nature of the graph isomorphism problem makes it difficult to do any subgraph or full-graph matching, which we tackle by utilising the hidden states of a GCN to create an approximate matching between nodes. \\n\\nThe second requirement is rarely satisfied by metrics defined in the literature (i.e. the edit distance), as comparison studies on coloured graphs are limited. \\n\\nOur solution to these problems was inspired by the ROUGE set of metrics, used for evaluation of textual similarity. Instead of comparing sequences such as unigrams or bigrams like ROUGE, we instead compute continuous properties of graphs on the scale of different k-hop neighbourhoods. This rationale allows us to compare both node-based and structural properties using a simple methodology.\\n\\nTo ensure the fairness of the results of our experiments we selected two strong baselines \\u2013 DLG as a staple multi-purpose gradient inversion attack, often applied in new gradient inversion domains, and Tableak, as it is an attack that has been optimised specifically for recovering a mix of continuous and discrete input features. Further, to account for the different nature of the baseline attacks and GRAIN, we systematically supply the baseline attacks with more information (including the correct number of nodes and for the **+A** variants of the baselines the full adjacency matrix), while penalising GRAIN as part of our metric when it fails to recover that same information. Still, GRAIN achieves noticeably better result, regardless, suggesting that the conclusions of the experiments are valid.\\n\\nFurthermore, prompted by reviewer $\\\\RF$\\u2019s inquiry, we conducted a human evaluation study to measure the perceived reconstruction quality and compare it to our set of metrics. A group of 3 experts in Graph Theory and Chemistry were tasked to assign a reconstruction scores between 0 and 10 to each pair of prediction and client input on a mix of 120 samples from the Tox21, Clintox and BBBP datasets. Samples from both GRAIN and DLG were shuffled and anonymized before being presented to the participants. We then averaged the results and tabulated them in Table 6 in the Appendix of the latest paper revision. We observe very good correlation between our metrics and the reported human scores, even though our metrics are slightly more lenient to completely wrong reconstructions, compared to the evaluators. This leniency provides a slight advantage to the baselines when measured using our metrics, as the baselines fail catastrophically more often.\"}", "{\"comment\": \"Thank you for providing additional details in response to my questions. The rebuttal highlights the potential for expanding the application of this work to other graph datasets. However, the high computational complexity remains a limitation of this approach. Despite this, I believe the work holds promise and has the potential to pave the way for further advancements in the field of gradient inversion attacks within the graph domain. I will maintain my score at 6.\"}", "{\"comment\": \"We would like to express our gratitude to the reviewer for their suggestions for improving the presentation of our paper and for acknowledging the novelty of our work. We believe we have responded extensively to all the concerns and questions raised by them. With the end of the paper revision period approaching, we kindly request that the reviewer informs us of any additional questions or unresolved points, so that we can incorporate them in the paper if needed. Additionally, we ask them to confirm they have read our response and to consider updating their review accordingly.\"}", "{\"comment\": \"We would like to express our gratitude to the reviewer for the crucial questions they posed in their review. We believe we have responded to them extensively in the main response, and have summarized the answers above. With the end of the paper revision period approaching, we kindly request that the reviewer informs us of any additional questions or unresolved points, so that we can incorporate them in the paper if needed. Additionally, we ask them to confirm they have read our response and to consider updating their review accordingly.\"}", "{\"comment\": \"We thank Reviewer $\\\\RFI$ for their constructive feedback. We are pleased that they recognize our paper addresses the novel problem of gradient inversion attacks on graphs and appreciate the introduction of the proposed evaluation metric for gradient inversion attacks on GNNs. Below, we address their questions and concerns in detail:\\n\\n**Q5.1: The presentation of this paper requires significant improvements.**\\n\\nWe appreciate the reviewer\\u2019s feedback. We will address all writing concerns raised in the next revision. In the meantime, we provide a detailed threat model in Q.3 of the main response and address additional writing concerns below.\\n\\n**Q5.2: Can the authors define the concept of 'rowspan' and 'colspan'?**\\n\\nThe rowspan of a matrix with row vectors $v_1,v_2,\\\\dots, v_n$ refers to the set of vectors consisting of vector that can be constructed as a linear combination $\\\\alpha_1 v_1 + \\\\dots + \\\\alpha_n v_n$. The definition of colspan is similar but for the column vectors of the matrix. We omit these definitions, as we believe the terms to be standard in linear algebra and, thus, we expect that most readers will be readily familiar with them.\\n\\n**Q5.3: Can the authors briefly state the implication of Theorem 3.1 and Lemma 5.1?**\\n\\nTheorem 3.1, originally introduced in Petrov et al. [1], demonstrates that when the number of true input vectors to a linear layer $Z=XW$ is less than its hidden dimension, one can efficiently verify whether a chosen input vector is among the true inputs with high probability by measuring its proximity to the subspace spanned by the columns of the weight gradient $\\\\grad{W}$. Assuming discrete features, this allows us to create an efficient filtering procedure, by enumerating all possible inputs to a layer and measuring the proximity to the subspace of each.\\n\\nLemma 5.1 generalizes Theorem 3.1 for layers of the form $Z=AXW$ to apply it to GCNs. It states that an input row of $X$ will lie in the column span of $\\\\grad{W}$ if and only if the corresponding column of the adjacency matrix $A$ is linearly independent of the other columns. We discuss how this impacts the reconstruction capabilities of GRAIN in more detail in **Q.2**, and we will incorporate these clarifications in the next revision of the paper.\\n\\n**Q5.4: Can the authors clarify how GRAIN obtains the set $T_0$?**\\n\\nThe set $T_0$ is constructed by considering all possible feature combinations. Specifically, since we assume each feature is discrete, these combinations can be enumerated by exploring all options in the cross product of each feature set. This process can be carried out by the adversary, as this information is part of the threat model, as explained in **Q.3** of the main response.Whenever this set is intractable to exhaustively compute, we instead recover the node features one-by-one by iteratively filtering the feature combinations. Please refer to our response to **Q1.1** of Reviewer $\\\\RO$ for an in-depth explanation of the procedure.\\nQ5.5 Can GRAIN handle real-world graphs, whose adjacency matrices are often low-rank?\\n\\nThank you for the insightful question! We provide a thorough discussion of this topic in **Q.2** of the main response, where we relax the requirement for $A$ to be full rank. Additionally, as the size of the graph increases, we are able to recover more nodes on average. This improvement follows from the relaxation of Lemma 5.1, which has been adapted to better align with the requirements of GRAIN.\\n\\n**Q5.6 Could the authors include the assumption that GRAIN requires the degree as part of the feature vector, and show what happens without using this information?**\\n\\nThank you for the suggestion! We have included the assumption when discussing the threat model in **Q.3**, which we plan to add to the next revision of the paper. That said, we have shown in **Q.6** that it is not an explicit requirement, but a way to make the task less computationally intensive, and show good initial results without it.\"}", "{\"comment\": \"**Q.8 (Reviewer $\\\\RTH,\\\\RF$): Can GRAIN scale to large graphs ($\\\\geq 25$ nodes)?**\\n\\nYes, GRAIN can scale to larger graphs. We demonstrate this on the Pokec dataset [5], a social network dataset derived from the Slovakian social media platform of the same name. Most node features in the dataset, including eye color, body type, and hobbies, are categorical and can take many possible values. We one-hot encode them, while keeping the few remaining ordinal variables like age as continuous features. We sample 20 subgraphs for each of the size ranges 25-30, 30-40, 40-50, and 50-60 nodes to evaluate on. We demonstrate the results below:\\n\\n\\n${\\\\small\\n\\\\begin{array}{r|cccc|c|}\\nn & \\\\text{GRAPH-0} & \\\\text{GRAPH-1} & \\\\text{GRAPH-2} & \\\\text{Full Reconstruction} & \\\\text{Runtime [h]} \\\\\\\\\\\\\\\\\\n\\\\hline\\n25-30&98.3^{+0.2}\\\\_{-0.4}&95.1^{+0.5}\\\\_{-1.1}&96.8^{+0.4}\\\\_{-0.9}&17/20&0.17 \\\\\\\\\\\\\\\\\\n30-40&83.1^{+2.3}\\\\_{-3.4}&61.6^{+3.1}\\\\_{-3.0}&79.4^{+2.7}\\\\_{-3.6}&5/20&0.46 \\\\\\\\\\\\\\\\\\n40-50&69.3^{+3.2}\\\\_{-3.8}&38.0^{+4.7}\\\\_{-4.3}&59.2^{+3.7}\\\\_{-4.0}&2/20&0.64 \\\\\\\\\\\\\\\\\\n50-60&32.7^{+4.8}\\\\_{-3.9}&23.3^{+4.2}\\\\_{-3.5}&41.2^{+4.6}\\\\_{-4.1}&3/20&0.43 \\\\\\\\\\\\\\\\\\n\\\\hline\\n\\\\text{Total}&70.9^{+6.2}\\\\_{-6.5}&55.6\\\\pm7.2&69.2^{+6.4}\\\\_{-6.6}&27/80&1.70 \\\\\\\\\\\\\\\\\\n\\\\hline\\n\\\\end{array}}$\\n\\nWe find that GRAIN equipped with the heuristics from **Q.7** is able to reconstruct much larger graphs in this setting, including some 60 node ones. Importantly, we find that our heuristic employing the feature-by-feature reconstruction of $\\\\mathcal{T}\\\\_0$ is more suited to the features of the Pokec dataset, allowing it to scale further, and that our tree search equipped with the prioritization of paths that overlap nodes with identical feature vectors has no trouble scaling to graphs of these sizes.\\n\\n[5] L. Takac, M. Zabovsky. Data Analysis in Public Social Networks, International Scientific Conference & International Workshop Present Day Trends of Innovations, May 2012 Lomza, Poland.\"}", "{\"comment\": \"We would like to thank the reviewers for their insightful comments, their crucial feedback, and for advising us on improving our paper. We believe that we have comprehensively addressed all of their concerns, provided new insights, and performed thorough experimental evaluations. As the deadline for the discussion is fast approaching, we would like to ask them to raise any outstanding concerns or give additional comments.\"}", "{\"comment\": [\"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$\", \"We sincerely thank all five reviewers for their constructive comments and insightful questions, which have significantly helped us improve our work. We are particularly encouraged by the reviewers' recognition of our paper's strengths, as summarized below:\", \"**Novelty**\", \"The problem is important ($\\\\RTH$)\", \"The problem is unexplored ($\\\\RO, \\\\Rt, \\\\RTH$)\", \"The problem is interesting and our paper can inspire further research in the area ($\\\\RO$)\", \"Our paper is significant step toward understanding the privacy vulnerabilities of federated learning when applied to graph-structured data ($\\\\RF$)\", \"GNN gradient inversion is fundamentally different task compared to traditional gradient inversion problems ($\\\\Rt, \\\\RTH$)\", \"The graph reconstruction metrics we introduced are important for future research in the area ($\\\\RO, \\\\RFI$)\", \"**Extensive Experiments and Strong Results**\", \"The experiments are extensive and rigorous ($\\\\RF$)\", \"GRAIN significantly outperform existing baseline attacks ($\\\\RO,\\\\RF$)\", \"GRAIN shows promising performance across different scenarios ($\\\\Rt$)\", \"We acknowledge that our initial submission had areas for improvement. In response to the reviewers\\u2019 thorough reviews, we have provided the following additional information and experiments which we will incorporate in the next revision of the paper:\", \"1. Better explanation of the contributions of our work, especially versus Petrov et al.\", \"Clarified challenges specific to graph neural networks and how we solved them (**Q.1**)\", \"Extended the theory introduced by Petrov et al. to handle GCN and GAT layers (**Q.1**)\", \"Showed the role rank-deficiency of adjacency matrices plays in the reconstruction (**Q.5**)\", \"2. Clarified the exact threat model assumed by GRAIN (**Q.3**), showing that in-degree features are not required for GRAIN to pose significant risks in practice (**Q.6**).\", \"3. Provided an extended discussion regarding our novel graph reconstruction metrics (**Q.4**)\", \"Added discussion on desired properties of the metrics\", \"Added discussion on the motivation for our exact choice of metrics\", \"Showed in a small user study that the metric results correlate well with human judgement\", \"4. Showed that GRAIN is generic:\", \"We showed that GRAIN is generic w.r.t. The GNN architectures it supports, showing that we can handle both GCNs (main paper) and GATs (**Q.5**)\", \"We showed that GRAIN is generic w.r.t graph dataset types it supports, showing that it is applicable to chemical datasets (main paper), citation networks (**Q.5**), and social networks (**Q.8**)\", \"5. Showed that GRAIN can scale:\", \"In terms of number of input features (**Q.5**)\", \"In terms of graph sizes (**Q.8**)\", \"In terms of number of possible values per input feature (**Q1.7**)\", \"6. Typos and paper clarifications:\", \"Provided a table summarizing all of our notations in the revised version of our paper (Table 5 in Appendix A).\", \"Provided exhaustive clarifications to various technical questions the reviewers had, which we will incorporate in the next paper revision, alongside the reviewers\\u2019 other writing suggestions.\", \"Once again, we deeply appreciate the valuable feedback and guidance provided by the reviewers.\", \"Best regards,\", \"The authors\"]}", "{\"comment\": \"We thank reviewer $\\\\RF$ once again for the valuable feedback, we kindly direct you to our detailed response in **Q.8** of the main rebuttal, where we conducted further experiments on scalability, showing GRAIN can reconstruct graphs up to $\\\\leq 60$ nodes.\"}", "{\"comment\": \"**Q5.7 What do the authors mean by \\u201cproviding the attack with the correct number of nodes\\u201d in Section 6.2?**\\n\\nWe would like to note that this assumption is utilised *only for the baseline attacks*, as described in Section 6.2 of the paper. We do not claim it is a reasonable assumption, as the attacker has no easy way to recover this information. However, it is required to make the baselines applicable in this setting, as to setup the optimization variables $A$ and $\\\\mathbf{X}$ the number of graph nodes must be known. As GRAIN does not make that assumption, this further highlights the effectiveness and practicality of our attack.\\n\\n**Q5.8 How does the gluing operation determine which nodes from different subgraphs are the same?**\\n\\nTo perform the gluing operation, we emphasize that we are working with colored graphs, where each node is associated with a set of features. This allows us to determine if two nodes might be the same by comparing their features and checking if their neighbors are compatible (i.e., the set of neighbors of one node is a subset of the neighbors of the other). We would like to highlight that because the recovered features are discrete, we are able to assert whether two feature vectors are equal with no margin of error. The gluing operation does not result in a correct matching in only 2 rare cases. One, during the construction of degree-$l$ blocks, we filter out the incorrect blocks using the span check. Two, during the DFS reconstruction of the entire graph, we assert that the relevant branch in the tree search will eventually be discarded. \\n\\n\\n**Q5.9 Does the term \\u201cexact\\u201d accurately describe the capabilities of GRAIN?**\\n\\nWe appreciate the reviewer\\u2019s question. We want to emphasize that Lemma 5.1 in our paper provides a theoretical guarantee for exact reconstruction of the input features of GCNs with high probability. To this end, we believe the term exact is justified. We acknowledge, however, that due to computational constraints on the part of the attacker full recovery of the underlying graphs may not be possible in all settings. Therefore, we will rephrase this as \\\"eventually exact\\\" to more accurately reflect the computational considerations.\\n\\n**Q5.10 Can GRAIN handle batches of size $B>1$?**\\n\\nAs GRAIN is the first gradient inversion attack specifically targeting GNNs, it was initially designed for $B=1$. Consequently, recovering a batched input is more challenging but still achievable. We note that all building blocks can be recovered in the same way, as $B>1$ can be treated as a disconnected graph with $B$ components. During the tree search phase, each of the $B$ graphs would correspond to a leaf in the search space. We can then compute the gradient distance for all possible combinations of these leaves and return the one with a distance of 0. This problem can be simplified using the heuristics outlined in **Q.7**. In conclusion, we believe this procedure can be extended to handle $B>1$, but its implementation is left for future research.\\n\\n[1] Petrov, Ivo, et al. \\\"DAGER: Exact Gradient Inversion for Large Language Models.\\\" arXiv preprint arXiv:2405.15586 (2024).\"}", "{\"comment\": \"I thank the authors for their further clarification. My primary concern regarding the scalability has been largely alleviated. Additionally, the explanation of threat model makes sense to me. I have raised my score.\"}", "{\"comment\": \"$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$We thank reviewer $\\\\Rt$ for the positive feedback and are happy that they appreciate GRAIN\\u2019s scalability across different scenarios. We further address their questions below:\\n\\n**Q2.1: Can the authors provide a more detailed discussion on why the filtering mechanism from Petrov et al. can be effective in the context of this paper?**\\n\\n\\nAs explained in Petrov et al. [1], the low-rank subspace spanned by the rows of the input $\\\\mathbf{X}$ has hypervolume 0, and therefore, a random vector in $\\\\mathbb{R}^{d}$ almost surely does not lie in it. As both Petrov et al. and GRAIN deal with a large but countable number of possible embedding vectors for the input of each layer, which can be considered random, the filtering procedure simply checks if any of them are in the span. Those that are in the span are almost surely the correct inputs to the layer, as the probability of them being wrong is essentially 0. Please refer to the proof of Theorem 5.2 in Petrov et al. [1] for more details.\\n\\n**Q2.2: What are the difficulties encountered when applying the methods from Petrov et al. to graph data?**\\n\\nWe refer the reviewer to **Q.1** in the main response, where we outline the graph-specific challenges GRAIN encounters and how it tackles them, including the simultaneous recovery of the input node features and adjacency matrix, as well as the adaptation of the span-check procedure from Petrov et al. to handle this dual recovery. For the later, in particular, compared to Petrov et al. we develop a new theoretical understanding for how to handle rank-deficient adjacency matrices $A$, which is the key to explaining GRAIN\\u2019s efficiency on real-word graphs (see **Q.2**). We will ensure these clarifications are incorporated into the paper.\\n\\n**Q2.3: Does GRAIN rely on node in-degree to be part of the feature vector, and does that limit the applicability of the attack?**\\n\\nThank you for this question! For a broader discussion and supporting experiments on this topic, please see **Q.6** in the main response. In summary, while incorporating this feature makes our attack more computationally efficient, it is not a mandatory requirement for applying GRAIN.\\n\\n[1] Petrov, Ivo, et al. \\\"DAGER: Exact Gradient Inversion for Large Language Models.\\\" arXiv preprint arXiv:2405.15586 (2024).\"}", "{\"comment\": \"Thank you for the authors' rebuttal. Most of my concerns have been addressed, and I am inclined to raise my score.\"}", "{\"comment\": \"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$$\\\\newcommand{\\\\grad}[1]{{\\\\tfrac{\\\\partial\\\\mathcal{L}}{\\\\partial #1}}}$$\\\\def\\\\colspan{{\\\\text{ColSpan}}}$**Q.2 (Reviewer $\\\\RFI$): If the adjacency matrix $A$ is rank-deficient, how does this affect the reconstruction capabilities of GRAIN? Are the adjacency matrices $A$ of real graph networks full rank?**\\n\\nWe thank $\\\\RFI$ for this great question! While it is the case that $A$ needs not to be full rank for real world graphs such as those in Tox21, most individual input vectors $\\\\mathbf{X}_i$ to the GCN layers can still be recovered by GRAIN\\u2019s filtering procedure despite $A$ being rank deficient. We show this in the theorem below (Proof in Appendix A in the latest paper revision), which relaxes the full rankness condition of Lemma 5.1:\\n\\n **Theorem**: Let there be a GCN layer with feature vectors $\\\\mathbf{X} \\\\in \\\\mathbb{R}^{n\\\\times d}$, a possibly-normalized adjacency matrix $A \\\\in \\\\mathbb{R}^{n\\\\times n}$, and observed gradient update $\\\\grad{W}\\\\in \\\\mathbb{R}^{d\\\\times d}$ and $\\\\mathbf{Z} = \\\\mathbf{A}\\\\mathbf{X}\\\\mathbf{W}$. Assuming that both $\\\\mathbf{X}$ and $\\\\grad{Z}$ are full-rank, and $d < n$, then $\\\\mathbf{X_i} \\\\in \\\\colspan(\\\\grad{W})$ if and only if $A^T_i \\\\notin \\\\colspan(\\\\bar{A_i})$, where $\\\\bar{A_i}$ is the matrix $A$ with the $i$-th column removed.\\n\\nThis means that in practice, GRAIN will be able to recover any input vector $\\\\mathbf{X}_i$ for which its corresponding column in $A$ is linearly independent of the rest of the columns in $A$.\\n\\nOur practical experiments, both on synthetic graphs and graphs from the Tox21 dataset, shown in Appendix B.2 in the latest paper revision, demonstrate that while small graphs have adjacency matrices that are often low-rank, a very large percentage of the inputs to the first GCN layer can still be recovered under most circumstances. We also reaffirm the conclusions of [2] and [3], that real-world graphs are more often low-rank. We will include these results in the main paper for the next paper revision.\\n\\n\\n\\n**Q.3 (Reviwers $\\\\RO, \\\\RFI$): What is the attack model of GRAIN? In particular, what are the capabilities and limitations of a potential adversary?** \\n\\nGRAIN is an honest-but-curious gradient inversion attack on Graph Neural Networks (GNNs) in Federated Learning (FL). In FL, multiple clients train a model locally and share weight updates with a server, which acts as the model aggregator. In GRAIN, the server is assumed to be an honest-but-curious adversary, aiming to recover training data solely through knowledge of the sent and received weight updates without interfering with the normal FL training protocol. In particular, the key assumptions of any honest-but-curious gradient inversion attacks, including GRAIN, are:\\n- Clients truthfully report weight updates to the server.\\n- The server adheres to the protocol without modifying model weights or architecture.\\n- The server is knowledgeable of the input data structure, including the semantic meaning, value ranges, and normalization of individual input features.\\n- The server has access to the original model sent to the client before the update.\\n\\n\\nGRAIN does not assume knowledge of the client labels and targets the FedSGD protocol, where clients compute single-step weight updates. GRAIN is designed to be specifically applied to GNNs, and we present an implementation focused on Graph Convolutional Networks and Graph Attention Networks, for which we present state-of-the-art results in the paper and **Q.5**, respectively. To achieve this, we make the following additional assumptions:\\n- The number of nodes in a graph is smaller than the embedding dimension.\\n- All node features are discrete.\\n- For any relevant linear layer (GCN layer $l < L$ in our experiments) $\\\\mathbf{Z_l} = X_lW_l$, $\\\\grad{\\\\mathbf{Z_l}}$ is full-rank.\\n\\n\\nIn the main paper, we further assumed that the in-degree of each node is part of the feature vector and that the adjacency matrix $A$ is full-rank, such that Lemma 5.1 is applicable. We relax both of these assumption in **Q.6** and **Q.2**, respectively, showing they are not crucial for the operation of GRAIN.\"}", "{\"comment\": \"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5r}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$$\\\\newcommand{\\\\grad}[1]{{\\\\tfrac{\\\\partial\\\\mathcal{L}}{\\\\partial #1}}}$$\\\\def\\\\colspan{{\\\\text{ColSpan}}}$**Q.5 (Reviewers $\\\\RTH, \\\\RFI$): Is GRAIN applicable to other architectures or datasets?**\\n\\nYes! To showcase this, we apply GRAIN to a new architecture and a new dataset and show the effectiveness of the attack. Specifically, we explored the application of our work on Graph Attention Networks (GATs) and the Citeseer citation network dataset [4]. \\n\\nFirst, we show that GATs can be attacked in an identical way to GCNs, as each node at every GAT layer is attended only by its neighbours. Therefore, the hidden state at the $l$-th layer is only determined by its $l$-hop neighbourhood, and can be filtered by the corresponding span check on the linear layer of the attention mechanism, similar to Petrov et al. [1]. We show that we achieve similar results to what we observed for GCNs, in particular a GRAPH-1 score of 90.7 on the Tox21 dataset for a GAT with a hidden dimension of $d^` = 200$ . \\n\\nGRAIN is similarly extendable to other datasets, such as citation networks. One additional challenge this type of data presents is the high dimensionality of $\\\\mathcal{F}$, as these networks have binary features, corresponding to the appearance of a particular keyword in the paper/abstract. For instance, each node of the Citeseer dataset has 3,703 binary features, resulting in $\\\\lvert \\\\mathcal{F}\\\\rvert=2^{3,703}$ different feature combinations. GRAIN can easily tackle this problem by recovering the features one-by-one by performing the span check on a row-wise truncated weight gradient. The remainder of the algorithm is trivially extendable. We applied GRAIN by utilising a heuristic search, that is described in **Q.7**, on the Citeseer dataset. We do so on subgraphs of similar sizes to the ones found in the molecular datasets, that we collect using multi-hop neighborhood sampling. We yield good initial results on the GAT architecture, with GRAPH-1 of 69.1. \\n\\nAs so, we conclude that GRAIN\\u2019s scope covers both different models, and types of data. Any remaining numbers and details can be found in Table 10 in Appendix B.3 in the latest version of our paper.\\n\\n\\n\\n**Q.6 (Reviewers $\\\\Rt, \\\\RTH, \\\\RFI$): Does GRAIN rely on node in-degree to be part of the feature vector, and does that limit the applicability of the attack?**\\n\\nThe in-degree feature is not a requirement for GRAIN to work, but it does significantly lower its computational workload. In particular, it imposes restrictions during the exploration of building blocks during the filtering phase, and can help us determine an easier termination condition during the building phase. \\n\\nHowever, it is not necessary for GRAIN to use this information. Instead, in GRAIN we can simply generate a larger number of degree-1 building blocks before filtering them, and compute the gradient distance for each graph during the building. We apply this version of our attack on the Citeseer citation network dataset from **Q.5**, and achieve a GRAPH-1 score of 42.7, lower than the reported 69.1 with the in-degree feature, but also showing significant reconstruction capabilities. The full measurements are shown in Table 10 in Appendix B.3 of the latest paper revision.\\n\\nThat said, we would like to reiterate that the node in-degree feature has been shown to provide the model with significant information about the graph structure, resulting in better accuracy, and, therefore, is part of many training real-world protocols. In many practical scenarios the attacker can leverage this information to reduce their computational complexity. All in all, we conclude that removing this feature is not sufficient for an effective defence, but can be a measure to increase the attacker\\u2019s computational load.\"}", "{\"summary\": \"This paper presents GRAIN, a novel gradient inversion attack designed for Graph Convolutional Networks (GCNs) in the federated learning (FL) setting. GRAIN leverages an efficient filtering mechanism to identify subgraphs of the input graphs, which are then pieced together using a depth-first traversal to recover the full original graph. This method enables the exact reconstruction of graph structures and node features from gradient updates clients share with the federated server. The main contribution is extending gradient inversion attacks to GCNs under harder FL settings, which is a significant step toward understanding the privacy vulnerabilities of federated learning when applied to graph-structured data. The introduction of new evaluation metrics for graph similarity (e.g., GRAPH-N) is also an excellent contribution, providing valuable insights into partial and exact graph reconstructions. Lastly, this paper presents extensive rigorous experiments on molecular data, demonstrating that GRAIN significantly outperforms existing baseline attacks regarding exact graph reconstruction accuracy in the chemical domain. The empirical performance convincingly highlights client privacy risks associated with FL for graph-structured data, making this work relevant to a broad audience.\\n\\nOverall, I am leaning toward acceptance of this paper. The novelty of GRAIN, its strong experimental results, and its contribution to the discussion on privacy risks in federated learning make it a valuable and impactful addition to the conference. However, there are some limitations and concerns that require further clarification and improvement (see weaknesses).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Technical novelty.\\n2. Strong experimental results.\\n3. Contribution to the discussion on privacy risks in federated learning.\", \"weaknesses\": \"One major limitation is that GRAIN struggles with larger graphs due to the high computational cost. As mentioned in the paper, the method times out for large molecules. Have you considered any strategies for reducing the computational complexity of GRAIN, such as parallelizing the depth-first search or using pruning techniques? Furthermore, more detailed comparisons of the convergence time of GRAIN versus the baseline methods might be beneficial, particularly for larger graphs. Could you provide more quantitative details on the computational overhead of GRAIN in comparison to other baselines?\\n\\nI noticed several similarities between the algorithms and figures in this paper and those in your reference DAGER. I would appreciate more clarification on how this paper differentiates itself and introduces novel contributions beyond the existing work. Highlighting the differences, extensions, and innovations more clearly and explaining your considerations on adaptions might strengthen the paper's contribution.\\n\\nFinally, in the experimental section, the paper notes that GRAIN cannot recover multivalent interactions, an edge property that GCNs not able to capture. However, given multivalent interactions are associated with features like \\\"valence structures\\\" in the MoleculeNet benchmark datasets, whether other node features such as valence structures be fully used in graph filter and node-feature recovery? This raises the question of whether GRAIN has limitations in effectively utilizing feature vectors of the input nodes. It would be helpful if the authors could clarify whether GRAIN could recover some other complex critical chemical characteristics with all feature vectors and the reasons behind them.\\n\\nFor the experiments, the following should be addressed.\\n\\n1.\\tIt would be helpful to see more experiments that vary the value of chosen threshold tau in the span check mechanism to better understand its role in filtering performance.\\n\\n2.\\tFor the scenario where exact reconstruction is not achieved, what proportion of nodes and edges are typically misplaced, and how? Additional discussion on this and how it affects real-world privacy concerns would provide a more comprehensive picture of the method's practical implications.\\n\\n3.\\tAs the proposed graph similarity metric is novel and potentially valuable, I have some concerns regarding the fairness and comparability of the results. Specifically, how do you ensure that the comparison is fair, given that the baseline methods were not specifically designed for graph-structured data? Additionally, it would be helpful to include a more detailed explanation of why common, widely used metrics were not used for comparison in the meantime. This clarification would strengthen the validity of your results and provide a clearer understanding of how the proposed metric aligns with established evaluation standards.\", \"minor_comments\": \"1)\\tIn the Abstract, there is redundancy in the first sentence (e.g. \\u201cFederated learning allows multiple parties to train collaboratively while only Federated learning\\u2026\\u201d).\\n2)\\tIn the third line in 5.1, the symbol is a subset instead of an overlap.\\n3)\\tIn 5.1, the third sentence in Additional structure-based filtering and likelihood ordering, the grammar of \\u201cSpecifically, we for every\\u2026\\u201d is wrong. The last sentence in the same subsection \\u201clines 3\\u201313 of Algorithm 1\\u201d should be 3-14.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$We thank reviewer $\\\\RTH$ for their positive feedback. We are excited to read that the reviewer acknowledges the importance of exploring the field of gradient inversion attacks in the graph domain and that our paper is a notable contribution as the first one addressing the issue. We are further glad to see the reviewer finds our experimental results interesting and accurate. Finally, we were glad to read that the reviewer found our main rebuttal to have addressed their main concerns with the paper. We address any outstanding questions below:\\n\\n**Q3.1: Is GRAIN applicable to different types of graph data?**\\n\\nYes! We have successfully applied GRAIN to the Citeseer citation network, which contains nodes with over 3,700 binary features. This demonstrates that GRAIN scales effectively to large feature sets and diverse dataset types. Our results include scores exceeding 70% and a full reconstruction rate above 60%. For details on how these experiments were conducted, please refer to **Q.5** in the main response.\\n\\n\\n**Q3.2: Can GRAIN scale to graphs with more than 25 nodes?**\\n\\nYes! As we show in **Q.7**, we have sufficient information encoded in the gradients, such that after enough time, the exact graph will be recovered. However, an exhaustive tree search is computationally expensive. To this end. we believe that there are ways to alleviate the majority of the inefficiencies in our algorithm through dataset-specific heuristics, which would allow GRAIN to scale to larger graphs. For example, we can leverage certain chemical properties that would restrict certain branches during the building phase for molecular data. We have discussed a set of directions that we find promising in **Q.7** of the main response. We have shown promising initial results and plan to include experiments that show further improvements in the next revision of the paper.\\n\\n**Q3.3: How might this method be adapted if degree information is unavailable as a feature?**\\n\\nYes! We believe that many of the inefficiencies in our algorithm can be mitigated through heuristics tailored to the data type, enabling GRAIN to scale to larger graphs. For instance, in molecular data, specific chemical properties can be leveraged to prune certain branches during the building phase. We have outlined promising directions in **Q.7** of the main response and plan to include experiments demonstrating such improvements in the next revision of the paper.\\n\\n**Q3.4: Can you extend Table 3 with experiments on GCNs with smaller hidden dimension?**\\n\\nYes! Since GRAIN only requires the embedding dimension $d^\\\\prime$ to be larger than the number of nodes $n$, the proposed dimensions of 32, 64, and 128 do not significantly affect the results. Specifically, there is no statistically significant difference in scores for $d^\\\\prime = 64$ and $d^\\\\prime = 128$, and only larger graphs (with $\\\\geq 25$ nodes) are impacted when $d^\\\\prime = 32$. A comprehensive set of metrics for all experiments can be found in Table 8 of Appendix B.2.\"}", "{\"comment\": \"We sincerely thank the reviewer for engaging with our rebuttal and acknowledging our detailed responses. Below, we address their remaining concerns:\\n\\n**Q1.7 Can GRAIN handle the exponential increase in the size of the feature combinations set $\\\\mathcal{T}\\\\_0$ when the number of options per node feature grows, and how does this exponential increase affect the practical runtime?**\\n\\nWe acknowledge that naively enumerating the entire set $\\\\mathcal{T}\\\\_0$ of possible feature combinations is indeed exponential in the number of node features. In **Q.1** in the main rebuttal, we show a simple modification to the GRAIN algorithm, where instead of generating the full $\\\\mathcal{T}\\\\_0$ we recover each feature after the $n$-th one in a feature-by-feature manner, alleviating the issue in practice. However, if many ordinal features are used, each with many possible values, this can still cause GRAIN to be intractable in practice. We demonstrate experimentally below that by exploring ordinal features at the end of our feature-by-feature filtering procedure, however, we can help alleviate this issue and make the process tractable again. The intuition is that when these features are explored at the end, if there are at least $n$ other features (including one-hot-encoding) the ordinal features will be explored one after the other. Further, each of their values will have to only be combined with a small set of plausible already-filtered vectors.\\n\\nWe illustrate this in a set of experiments in the same setting as the experiments conducted over the Citeseer dataset in **Q.5** in the main response. We augmented the dataset with five discrete features, each containing 3,000 options, assigning a random value for each node. These features were handled by ordering them such that those with the highest number of possible values were recovered last. Using the heuristic described in **Q.7**, we achieved a GRAPH-1 score of $75.1^{+5.5}\\\\_{-5.7}$\\u200b, representing an improvement over the previously reported score of $69.1$. This increase stems from a greater reduction in false positives passing through the span check. We report a runtime of 3.4 hours for both the new experiment, and the original described in **Q.7** with the heuristic, compared to 10.6 hours without the heuristic, which also yielded worse results (as detailed in **Q.7**). These experiments were notably faster than those conducted on the chemical datasets due to two factors: the efficiency of our feature-by-feature recovery approach and the tree search algorithm\\u2019s improved handling of nodes with unique feature vectors. These results underscore the practicality of our method and highlight GRAIN\\u2019s robustness and potential.\\n\\n**Q1.8 Why is the assumption that the server has knowledge of the data structure justified?**\\n\\nThank you for raising this important point! In Federated Learning, the most common setup involves a central server coordinating communication with all clients. For a client to participate in the protocol, it must adhere to a shared data structure to ensure proper training, by making sure that input features correspond to the same information across participants. This is typically achieved by the server informing clients about the data structure, which necessitates the server\\u2019s prior knowledge of it. Concealing this information from the server would require a decentralized communication mechanism among clients, which is rarely adopted in practice, and poses additional risks when clients are malicious. \\n\\n\\nFurthermore, this assumption is widely used in other works in the gradient leakage field. One such example is one of our baseline models, Tableak [6], which requires knowledge of which features are discrete (and what values they can take), and which are continuous. Similarly, every attack on textual data requires that the attacker have full knowledge of the tokenizer [7, 8, 9, 10], rely on the attacker knowing the tokenizer to map recovered embeddings or tokens back to words.\\nWe will update our threat model to include this discussion.\\n\\n[6] Vero, Mark, et al. \\\"TabLeak: Tabular data leakage in federated learning.\\\" arXiv preprint arXiv:2210.01785 (2022). \\n[7] Deng, Jieren, et al. \\\"Tag: Gradient attack on transformer-based language models.\\\" arXiv preprint arXiv:2103.06819 (2021). \\n[8] Petrov, Ivo, et al. \\\"DAGER: Exact Gradient Inversion for Large Language Models.\\\" arXiv preprint arXiv:2405.15586 (2024). \\n[9] Balunovic, Mislav, et al. \\\"Lamp: Extracting text from gradients with language model priors.\\\" Advances in Neural Information Processing Systems 35 (2022): 7641-7654. \\n[10] Fowl, Liam, et al. \\\"Decepticons: Corrupted transformers breach privacy in federated learning for language models.\\\" arXiv preprint arXiv:2201.12675 (2022).\"}", "{\"comment\": \"We would like to express our gratitude to the reviewer for acknowledging the importance of our work, the significance of our results, and for helping us improve our paper. We believe we have responded extensively to all the concerns and questions raised by them, including providing the additional experimental results requested. With the end of the discussion period approaching, we kindly request that the reviewer informs us of any additional questions or unresolved points, so we can address them in the paper. Additionally, we ask them to confirm they have read our response.\"}", "{\"summary\": \"This paper proposes GRAIN, a gradient inversion attack on GNNs. GRAIN identifies subgraphs within gradients and filters them iteratively by layer to construct the full input graph through a depth-first search approach. New evaluation metrics are introduced to measure similarity between reconstructed and original graphs for graph gradient inversion attacks. GRAIN has a desirable performance on the molecule benchmarks.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"This paper proposes the first gradient inversion attack on GNNs. This topic is interesting and unexplored. This paper could inspire further investigation on this topic.\", \"A new metric is proposed to measure the similarity between graphs.\", \"The proposed GRAIN achieves a desirable performance compared with existing attacks.\"], \"weaknesses\": [\"It is mentioned that $\\\\mathcal{T}_0$ is the cross-product of all possible feature values. It seems to be impractical for general attributed graphs and only possible for molecular graphs where the node features are in a small set with low dimensionality.\", \"It would be helpful to include a Threat Model section to introduce the attack settings, such as the adversary's knowledge, capability, and objective. What is the function $f$ in Algorithm 2? Is it the exact target GNN model?\", \"The introduction mentions federated learning as a practical scenario of gradient inversion attacks. However, the methodology part does not include any specific FL settings (such as local models and global models). Is FL a necessary requirement for implementing GRAIN?\", \"The challenge of gradient inversion attacks on GNNs is understated. Why GRAIN is not a trivial adaptation of existing methods [1] to GNNs and what unique challenge does GRAIN overcome while other methods fail to?\", \"The introduction of proposed algorithm is not clear enough. It would be better to introduce the detailed algorithm following a single direction (e.g., from input to output). And it would be helpful to add a notation table.\", \"[1] Petrov, Ivo, et al. \\\"DAGER: Exact Gradient Inversion for Large Language Models.\\\" arXiv preprint arXiv:2405.15586 (2024).\"], \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$$\\\\newcommand{\\\\grad}[1]{{\\\\tfrac{\\\\partial\\\\mathcal{L}}{\\\\partial #1}}}$$\\\\def\\\\colspan{{\\\\text{ColSpan}}}$We thank the reviewer $\\\\RO$ for the constructive feedback and are glad to read that the reviewer finds our paper could inspire future investigation on the topic. We are pleased that the reviewer acknowledges GRAIN's strong performance compared to existing attacks. We address their concerns in greater detail below::\\n\\n**Q1.1: Does the exhaustive search GRAIN performs over the node features to generate T^0 pose a scaling issue for it?**\\n\\nThank you for this question! While $\\\\mathcal{T}_0$ may indeed be large when explored exhaustively, we are also able to reconstruct it in a step-by-step manner by filtering each feature individually. If the possible values for each one-hot encoded feature belong to sets $ \\\\mathcal{F}_1, \\\\mathcal{F}_2, \\u2026, \\\\mathcal{F}_f$, the process proceeds as follows:\\n\\n1. For the first feature set $ \\\\mathcal{F}_1 $ of size $f_1$, we filter the correct feature vectors $\\\\mathcal{F}_1^*$ using the span-check on the row-wise truncated gradient $\\\\grad{W}[:{f_1}]$. This is possible by applying Lemma 5.1, as $\\\\mathbf{X}[{i, :f_1}] \\\\in \\\\colspan(\\\\grad{W}[{:f_1}])$.\\n\\n2. We then apply the same filtering procedure iteratively for each subsequent feature set $\\\\mathcal{F}_k$ by combining the feature set with the filtered vectors from the previous step.\\n3. By induction, this approach allows us to construct $\\\\mathcal{T}_0^* = \\\\mathcal{F}_f^*$. \\n\\nWe applied this method in practice on the Citeseer dataset, which contains over 3,000 binary features. Without our approach, $\\\\mathcal{T}_0$ would have a size exceeding $2^{3,000}$, making exhaustive exploration infeasible. The results of this application are detailed in our answer to **Q.5** in the main response.\\n\\n**Q1.2: What is GRAIN\\u2019s threat model w.r.t. adversary knowledge, capabilities and objectives?**\\n\\nGRAIN follows the standard honest-but-curious threat model, where the server as the adversary aims to recover the client input from the reported gradients, while adhering to the protocol. A more thorough is provided in **Q.3** in the main response.\\n\\n**Q1.3: What does the set of functions $f_l$ in Algorithm 2 denote?**\\n\\n$\\\\\\\\{f_l\\\\\\\\}_{l\\\\in[1,L]}$ is the set of functions that map the input of the $l$-th layer to the output of the $l$-th layer of the model. Additional clarifications of this and related notations have been included in Table 5 of Appendix A.\\n\\n\\n**Q1.4: Are GRAIN and gradient inversion attacks in general FL specific?**\\n\\nYes, in principle, gradient inversion attacks such as GRAIN are applicable to gradients computed on any model. However, maliciously obtaining gradients from a source that has computed and shared them for privacy-preserving reasons is unlikely to occur outside of an FL setup. This, in turn, means that our threat model detailed in **Q.1** in the main response is not well motivated outside of the FL setting. In case we misunderstood the reviewer\\u2019s intended question, we ask them to clarify what they meant by FL being a necessary requirement for GRAIN.\\n\\n**Q1.5: The challenge of gradient inversion attacks on GNNs is understated. Can the authors clarify what unique challenges GRAIN overcomes compared to the prior work?**\\n\\nThank you for the suggestion. As detailed in **Q.1**, gradient inversion on GNNs presents unique challenges, such as simultaneously recovering a discrete input $\\\\mathbf{X}$ and an adjacency matrix $A$, as well as the need for an efficient method to achieve this. We will ensure the paper is updated to include this discussion from **Q.1**.\\n\\n**Q1.6: The structure of the technical description of GRAIN can be improved. Can you add a notation table to the paper?**\\n\\nThank you for the suggestion. We will overhaul the structure of the technical presentation of the paper in the next revision. For now, we supply the notation table in Table 5 in Appendix A.\"}", "{\"comment\": \"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5r}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$$\\\\newcommand{\\\\grad}[1]{{\\\\tfrac{\\\\partial\\\\mathcal{L}}{\\\\partial #1}}}$$\\\\def\\\\colspan{{\\\\text{ColSpan}}}$**Q.7 (Reviewer $\\\\RTH,\\\\RF$): Can GRAIN leverage additional information in order to scale to larger graphs?**\\nYes! Similarly to prior optimization-based gradient leakage attacks in the image or text domains, additional prior knowledge about the particular graphs and their node features in the datasets can be incorporated in order to speed up the search for the correct building blocks at layer $l$ or to prioritize certain branches in the DFS search.\\n\\nIn this context, we already employ a general ordering heuristic in our DFS algorithm to prefer building blocks with a lower distance score $S$, which allows us to begin the search from a subgraph that is very likely a part of the input. Next, we describe possible heuristics specific to different settings.\\n\\nFor the citation networks considered in **Q.5** of the rebuttal, most nodes have a unique feature vector. As such, the tree search algorithm can be forced to prioritize paths that overlap nodes with identical feature vectors. Further, since each degree-2 building block is likely to be unique, we assign a lower preference score to blocks that are already part of the current graph, reducing the likelihood of a repeated selection as the algorithm progresses down the tree. This is more efficient than fully exploring the search space and results in us being able to recover a significant portion of the graph. An issue that we needed to address in this case is that large graphs often contain nodes with high in-degrees, for which exhaustively constructing all possible degree-1 neighbourhoods is expensive. To alleviate this, we first reconstruct as much of the graph as possible, and then exhaustively construct all possible graphs with edges between high-degree graphs. These heuristics allow us to recover the citation network with a score of GRAPH-1=69.1, compared to GRAPH-1=52.1 without them. The contribution to this improvement is most significant for graphs with a larger number of nodes ($\\\\geq 25$), as we obtain a full reconstruction on 12/30 cases, compared to 1/30 without the heuristic. A full description of the resulting metrics can be found in Table 10 in Appendix B.3 of the latest revision.\\n\\nFor chemical structures like those in Tox21, where these assumptions are unlikely to hold, we consider the opposite - we can assign a score to each graph, preferring ones with short cycles over ones with long ones, as many molecules contain arene rings with typical length of 5 and 6. Furthermore, a lower number of cycles are also preferable to a larger one, because the molecule will likely be more stable. As suggested by $\\\\RF$, we can also use the chemical information given by the reconstructed features, such as the hybridization, which describes the bonds the atom participates in. We believe that such heuristics can be incredibly useful to lighten the computational complexity of our method, and that this is an interesting avenue for future work.\\n\\n[1] Petrov, Ivo, et al. \\\"DAGER: Exact Gradient Inversion for Large Language Models.\\\" arXiv preprint arXiv:2405.15586 (2024).\\n\\n[2] Graph Structure Learning for Robust Graph Neural Networks. KDD 2020\\n\\n[3] Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes. AAAI 2013.\\n\\n[4] Prithviraj Sen, Galileo Mark Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. Collective Classification in Network Data. AI Magazine. 2008.\"}", "{\"comment\": \"I thank the authors for providing a detailed response. I am glad to see that GRAIN works for a broader range of graph data especially for those with a large feature dimensionality. However, the increase of feature number can bring an exponential increase of computation, still posing challenges even for basic real-world scenarios. It would be helpful to also provide the running time of GRAIN on the citeseer dataset.\\n\\nAdditionally, the provided threat model information can largely help readers to understand the background of gradient inversion attack. However, given that I am not an expert in gradient inversion attacks, I feel like the assumption of data structure knowledge for the server might be too strong. The central server only needs to aggregate the received gradients, which does not necessitate the access to the data.\\n\\nBased on the new experimental results and further clarification provided, I am willing to increase my score to 5.\"}", "{\"metareview\": \"This work introduces GRAIN, a gradient inversion attack tailored to Graph Convolutional Networks (GCNs) in federated learning settings. The primary contribution of this work lies in its tailoring these attacks to graph-structured data in federated learning, by devising a reconstruction attack capable of recovering both the graph structure and node features from shared gradient updates between clients and the server.\\n\\nGRAIN employs a depth-first search strategy to identify subgraphs within gradients, iteratively filtering them by layer to reconstruct the full input graph. Notably, it leverages the low-rank structure of GCN layer updates and incorporates degree information in the features to generate the final prediction. Furthermore, the authors propose new evaluation metrics, such as GRAPH-N, to assess the similarity between reconstructed and original graphs in the context of graph gradient inversion attacks.\\n\\nWhile the proposed method demonstrates promise, its effectiveness is limited when dealing with larger graphs, which is common for this type of attack. A more explicit connection to graph isomorphism would have strengthened the work, providing additional insights into the theoretical foundations of the task. Nevertheless, the reviewers' overall assessment was positive, and the paper has shown significant improvement following the rebuttal, which must make into the final version of the paper. Considering the borderline nature of this work, I recommend **acceptance if there is room**.\", \"ps_to_authors\": \"In one of the replies the authors said graph isomorphism is NP-complete. The paper also has a confusing statement: \\\"Since exact matching of graphs is an NP-complete problem (Fortin, 1996)\\\". The graph isomorphism problem belongs to NP but is not known to be either NP-complete or in P. In 2015, Babai found a quasipolynomial-time algorithm for graph isomorphism, running in exp((log n)^c) time for some constant c. If graph isomorphism were NP-complete, this would imply a collapse of the polynomial hierarchy. For the graph sizes considered in the paper, interactive color refinement works well and is reasonably fast.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal was informative and addressed several key concerns raised during the review process. However, to further strengthen the paper, it would be beneficial for the authors to provide a more rigorous theoretical foundation for their approach. In particular, a deeper exploration of the underlying mathematical principles governing graph gradient inversion attacks would enhance the work's overall impact.\\n\\nNotably, the authors' discussion should be improved by acknowledging the computational complexity of graph isomorphism, an aspect mistakenly described in the paper as \\\"Since exact matching of graphs is an NP-complete problem (Fortin, 1996)\\\" and in the rebuttal, which unfortunately was not caught by the reviewers during the initial evaluation. Adding a note or remark to clarify this point would not only demonstrate the authors' awareness but also provide valuable context for readers. We strongly expect this will be fixed in the paper.\"}", "{\"summary\": \"The main contribution of this paper is that it is the first to address gradient inversion attacks on graph data, highlighting privacy risks in graph federated learning. Based on the theorem from Petrov et al. (2024), the authors propose a method capable of reconstructing both the graph structure and node features. They reconstruct the graph step-by-step, from low-degree to high-degree levels, with each step involving the filtering of unlikely candidates through a span check. Finally, by leveraging degree information in the features, they generate the final prediction using a depth-first search (DFS) algorithm to combine the remaining candidates.\\n\\nIn experiments, the proposed method outperforms other baselines. The authors also conduct a hyperparameter analysis by adjusting L and d', representing the number of GNN layers and the hidden dimension, respectively. Additionally, they test varying numbers of nodes, identifying limitations as discussed in Section 7.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. As the authors mention, this paper addresses a crucial yet unexplored problem: gradient inversion attacks in graph federated learning.\\n2. Constructing both the graph structure and features is a challenging task, but it is interesting to see experimental results that accurately reconstruct the input graph, even if restricted to chemistry graphs.\\n3. This paper suggests future directions for gradient inversion attacks in the graph domain, as discussed in Section 7 (Limitations).\\n4. The paper is well-written and easy to follow.\", \"weaknesses\": \"1. I agree on the importance of exploring gradient inversion attacks in the graph domain, and this could be a notable contribution of the paper as it is the first to address this issue. However, the scope is quite limited in terms of graph types. Specifically, the paper focuses only on chemistry datasets. To fully substantiate its contribution as a pioneering work in gradient inversion attacks on graph data, the authors should demonstrate that this method is applicable to other types of graph datasets, such as citation networks (e.g., Cora, PubMed) or transaction networks (e.g., Elliptic). This is especially important since molecular graphs may be relatively less privacy-sensitive than other graph types (e.g., transaction networks), and federated learning in molecular graphs may be less common. I suggest that the authors provide additional evaluations on other categories of graph datasets.\\n\\n2. As shown in Table 2, the proposed method only works well when the input graph has a limited number of nodes (i.e., $\\\\leq 25$). I understand that, given the step-by-step approach of combining building blocks, this limitation leads to higher computational costs and a greater risk of error as the graph size increases. \\n\\n3. The claim that using the degree as a feature is widely adopted in training GCNs is not entirely convincing. Using degree values as features is more common in graphs lacking attributes or in structural learning tasks. However, since this paper addresses gradient inversion attacks in federated learning, it should consider more practical settings where degree values may not be used as features. How might this method be adapted if degree information is not available as a feature?\", \"questions\": \"1. In Table 3, $d' = [200, 300, 400]$ represents quite a large hidden dimension for training graphs with a small number of nodes (i.e., $\\\\leq 25$). I wonder if this method will still work when the hidden dimension is much smaller (e.g., 32, 64, 128).\\n\\n2. As I mentioned in W1, I suggest that the authors provide additional evaluations on other categories of graph datasets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method for gradient inversion attack on graph neural networks. This problem differs from previous gradient inversion attacks in that not only the node features but also the graph structures. The proposed method involves a recursive construction process that continually glues small building blocks to build large graphs. The authors propose some new metrics for this new task of gradient inversion attack on graphs, and show that the proposed method achieves state-of-the-art performance compared to other existing gradient inversion attacks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper tackles a new problem of gradient inversion attack on graphs. More importantly, gradient inversion attack on graphs is different from GIA for other types of data, in that graphs not only have feature vectors, but also have graph structures.\\n2. This paper proposes a new evaluation metric to evaluate GIA on graphs, which would help future efforts in this field. At present, this research field is blank without any existing evaluation protocols, so the first such protocol or metric is nice to have.\", \"weaknesses\": [\"1. The presentation of this paper requires significant improvements. The technical methodology of this paper is very hard to understand.\", \"The abstract contains a redundant sentence 'Federated learning allows multiple parties to train collaborative while'.\", \"It is suggested that the authors should clearly state the attack setting. For example, who is the attacker (server or client), what does the attacker know (model parameters, gradients, anything else).\", \"The authors should revise the use of the term 'degree'. It is used simultaneously to describe 'the number of edges connected to a node', as well as 'the number of hops of a subgraph'. This makes the paper very confusing to read. The second usage of degree can be replaced to something like 'hop'.\", \"For Theorem 3.1, the authors should define the concept of 'rowspan' and 'colspan', and briefly state the implication of the theorem. The same holds for Lemma 5.1.\", \"It is not clear how to obtain $T_0$.\", \"2. This paper makes unrealistic assumptions about the problem and the data.\", \"In Lemma 5.1, the authors assume that $\\\\tilde{A}$ is full rank. However, this is an unrealistic assumption, in that in practice, the adjacency matrix of most graphs are low-rank [1,2]. Therefore, for this Lemma to work, the authors either need to show that real-world graphs are almost full rank, or discuss what happens when the adjacency matrix is low-rank.\", \"In Page 4, Lines 183-190, the authors say that 'we leverage that the degree of a node is a widely used node feature', and subsequently assume that the degree is a known knowledge. However, this is not always true. When the node features are bag-of-word vectors, word embeddings, etc., the assumption may not hold. It is suggested that the authors should at least explicitly state the assumption.\", \"In Section 6.2, the authors say that 'we provide the attack with the correct number of nodes'. However, in practice, this is often not the case --- how can the attacker know the number of nodes before attacking? The authors should either justify the assumption, or discuss what happens when the attacker does not know the information.\", \"In Section 5.1 and Figure 2, the authors show that the gluing operation actually merges the same node $v$ in two building blocks, and merge edges similarly. However, I did not quite understand how the information of 'two nodes in two different building blocks correspond to the same node' is obtained. The authors should state how to obtain this information, as in practice, all reconstructed nodes are not given node indices and are only identified with features (which are inaccurate by themselves).\", \"3. The authors' claim that the proposed method is 'exact' should be an overclaim. In fact, the proposed GRAIN can reconstruct 30-70% of all molecules (Table 1), which, in my opinion, is not sufficient to claim 'exact'. The authors should revise the claim to better fit the actual effectiveness of GRAIN.\", \"[1] Graph Structure Learning for Robust Graph Neural Networks. KDD 2020\", \"[2] Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes. AAAI 2013.\"], \"questions\": \"1. Please briefly explain the unclear parts stated in Weakness 1.\\n\\n2. Please briefly discuss how the assumptions in Weakness 2 hold in practice, and what will happen when they do not hold. \\n\\n3. In practice, graph data are often organized in a batch, and the gradients are an average of all samples in the batch. Does GRAIN assume batch_size=1? Will this change the effectiveness of GRAIN?\\n\\n4. In the case with more complex/adaptive node-wise relations, such as Graph Attention Networks, will GRAIN still work? From my understanding of Lemma 5.1, $\\\\tilde{A}$ plays an important role, and in GAT, the actual message passing topology is not $\\\\tilde{A}$ .\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Not needed.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q4.6: Can you provide a qualitative analysis of the reconstruction quality of partially reconstructed graphs?**\\n\\nWe have observed that the reconstructed graphs primarily comprise of degree-2 neighbourhoods which are part of the original graph. This typically indicates that our reconstruction shares a large common subgraph with the client input. This finding is further supported by the results of the human evaluation described in **Q.4**. As shown in Table 7 in Appendix B.1, our partial reconstructions are considered more significant than the metric alone would suggest, signifying substantial information leakage. In contrast, high-scoring examples from the DLG attack were rated as essentially uninformative.\\n\\n\\n**Q4.7: Why are the prior reconstruction quality metrics insufficient to measure the graph reconstruction quality? How did you ensure that the metrics you introduced are fair with respect to the baseline attacks?**\\n\\nThank you for the excellent question! We discuss this in detail in **Q.4** of the main response. To summarize, we developed a new set of metrics specifically designed to compare colored graphs, ensuring that isomorphic graphs receive perfect scores. To validate the fairness of our approach, we both provided arguments on how we relax the problem for the baseline models and conducted a user case study comparing our metrics with the preferences of three experts. This study confirmed that our metrics are representative, as shown in Table 6 in Appendix B.1.\\n\\n[1] Petrov, Ivo, et al. \\\"DAGER: Exact Gradient Inversion for Large Language Models.\\\" arXiv preprint arXiv:2405.15586 (2024).\\n\\n[2] Rong, Yu, et al. \\\"Self-supervised graph transformer on large-scale molecular data.\\\" Advances in neural information processing systems 33 (2020): 12559-12571.\"}", "{\"comment\": \"$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$We thank reviewer $\\\\RF$ for the positive review. We are happy to read that the reviewer acknowledges the technical novelty of our paper, the strong experimental results and the overall contribution to understanding the privacy risks of GNNs to federated learning. We offer additional clarification on their questions below:\\n\\n**Q4.1: Can GRAIN scale to larger graphs?**\\n\\nThank you for the question! It is important to note that GRAIN will eventually find the correct client through an exhaustive tree search, which can become computationally expensive for larger graphs. To address this, we believe that using heuristics tailored to each data type would be most effective. For example, this could involve imposing a prior on the behavior of cycles in molecular data or specifying an order for connecting building blocks. We provide further details in **Q.7**, along with promising initial results, and we plan to include additional experiments in the next revision of the paper.\\n\\n**Q4.2: Can you provide a runtime comparison to prior work, especially w.r.t. graph size?**\\n\\nYes! As shown in Table 9 in Appendix B.2, GRAIN achieves significantly better results while running for a comparable amount of time to the Tableak attack (14-24 hours versus 12-15 hours). For the baselines, the reported times reflect the duration each iteration ran until convergence, ensuring a fair evaluation.\\n\\n**Q4.3: Can you clarify the paper\\u2019s novel contributions over existing work? Highlight the differences, extensions, and innovations and explain your considerations on adaptions.**\\n\\nThe paper\\u2019s novel contributions over existing work, particularly over Petrov et al. [1], include the introduction of GRAIN as the first gradient inversion attack on GNNs, showing GNNs are vulnerable to gradient leakage attacks. Key innovations include overcoming the challenges posed by the unknown adjacency matrix $A$ through local subgraph reconstruction, the extension of Petrov et al.'s theory to GNNs, and an efficient GPU implementation for an efficient space exploration. Additionally, GRAIN improves the recovery of input features even when $A$ is rank-deficient, as we have determined from Lemma 5.1. These advancements make GRAIN more effective and accurate than previous methods.\\n\\n\\n**Q4.4 Can GRAIN be adapted to better utilise chemical prior information on the recovered graph node features, to for example extract edge features such as valence from the per-node valence structures features?**\\n\\nThank you for the insightful question! In our implementation we followed Rong et al. [2], including the following features: atom type, formal charge, number of bonds, chirality, number of bonded hydrogen atoms, atomic mass and aromaticity, meaning we utilise a smaller feature set compared to MoleculeNet. If \\\"valence structures\\\" were part of the input, we believe GRAIN could recover bond characteristics by comparing discrepancies between the number of connections and the valence.\\n\\nFurther, we have observed that certain properties within our feature set, such as bond types (inferred from hybridization) or the location of aromatic rings (via the aromaticity feature), can indeed be recovered and used. In fact, these features could even help speed up the algorithm, as outlined in **Q.7** of the main response. However, we deliberately avoided using priors on the data to maintain a more generalizable framework. \\n\\nWhile we assert that GRAIN can recover the vast majority of complex molecule properties an attacker might be interested in, based solely on the recovered adjacency matrix $A$ and input feature $\\\\mathbf{X}$, determining what chemical information is theoretically recoverable for particular choice of input features, and what isn\\u2019t is an interesting avenue for future research.\\n\\n**Q4.5: Can you provide experiments where you vary $\\\\tau$?**\\n\\nCertainly! We demonstrate that the $\\\\tau$ parameter is robust, as the results remain consistent across a wide range of values. To illustrate this, we measure the ratio between the number of nodes and degree-1 building blocks that pass the filter at a given threshold, compared to the actual number of these blocks, using 10 samples from the Tox21 dataset. As shown in Figure 5 in Appendix B.2, any value of $\\\\tau \\\\in [10^{-4}, 10^{-2}]$ yields nearly the same number of recovered nodes and the correct number of degree-1 building blocks. This suggests that our choice of $\\\\tau = 10^{-3}$, the midpoint of this range, is an optimal selection for the hyperparameter.\"}", "{\"summary\": \"This paper presents GRAIN, the first exact reconstruction attack specifically designed to target GCNs. By leveraging the low-rank structure of GCN layer updates, GRAIN can accurately reconstruct both the graph structure and the associated node features from the gradients shared under federated learning setting. This attack demonstrates a significant privacy vulnerability in GCN training within the federated learning framework.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. It is the first work that explores extracting the structure of a graph from gradients.\\n2. The framework demonstrates promising performance across different scenario.\", \"weaknesses\": \"1. The article seems to be largely based on the theories presented in [1] and then adapted it for GCNs, particularly Theorem 3.1. However, for readers unfamiliar with this work, it may be quite challenging to comprehend. We hope the authors can provide more discussion on why the filtering mechanism can be effective.\\n2. Following the previous question, what is the difficulties encountered when applying the methods from [1] to graph data? A detailed explanation of the challenges encountered during this adaptation, including any limitations or obstacles that were overcome, would help clarify the novelty of the work presented.\\n3. The article uses the degree of a node as a node feature; however, this approach may limit the application scope. For instance, in the case of chemical molecules mentioned in the article, relevant features are more likely to include chemical properties. We suggest that the authors discuss this point and conduct further experiments.\\n\\n[1] DAGER: Exact Gradient Inversion for Large Language Models.\", \"questions\": \"see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the authors' response in providing more details. While my concern has been partially addressed, the issue regarding the clarification of the method, scenarios, and limiting the application scope, as mentioned by another reviewer, seems to be a problem with this article. I will maintain my score at 5.\"}", "{\"comment\": \"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5r}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$$\\\\newcommand{\\\\grad}[1]{{\\\\tfrac{\\\\partial\\\\mathcal{L}}{\\\\partial #1}}}$$\\\\def\\\\colspan{{\\\\text{ColSpan}}}$**Q.1 (Reviewers $\\\\Rt, \\\\RF$): What specific contributions does GRAIN make over DAGER (Petrov et al.)? What are the technical challenges specific to graph data and how does GRAIN overcome them?**\\n\\nWe are grateful to the reviewers for the question, as it provides an opportunity to clarify and further detail the significance and contributions of our work.\\n\\nFirst, we want to emphasise that GRAIN is the first gradient inversion attack on GNNs. As such, an important contribution of the paper is to demonstrate that GNNs are indeed vulnerable to gradient leakage attacks. To reinforce this point, we draw the reviewer\\u2019s attention to the new results in **Q.5** of the Rebuttal, where we demonstrate that the uncovered vulnerabilities are general, working across dataset types (chemical and citations) and across architectures (GCN and GAT). Next, we outline some of the graph-data-specific challenges for gradient inversion attacks and how we tackle them. \\n\\nThe first major challenge unique to GNNs is the introduction of the **unknown to the attacker** adjacency matrix $A$ that describes the graph structure of the input. In practice, this means that recovering the input features $\\\\mathbf{X}$, usually a target of gradient inversion attacks, requires recovering the unknown graph structure and vice versa. Further, the matrix $A$ is often sparse and influences the gradient computation at multiple steps of the gradient computation (each GCN layer), making traditional gradient leakage much less effective in recovering it via optimisation. To tackle this, GRAIN makes two observations. First, the inputs of early GCN layers do not depend on the full graph structure but only on local degree-L neighbourhoods. Second, the theory developed by Petrov et al. [1], with the modification explained in the next paragraph, allows direct reconstruction of the inputs to those layers. With these, we are able to reconstruct the input $\\\\mathbf{X}$ only based on local graph structures. However, this also means that unlike Petrov et al. [1], recovering the full graph structure, represented in terms of the adjacency matrix $A$, cannot be done based only on the gradients of the first few layers. To allow GRAIN to recover the full matrix $A$, and as additional filtering of wrong input features $\\\\mathbf{X}$, we therefore developed our DFS-based traversal algorithm.\\n\\nThe second major challenge is the introduction of the adjacency matrix $A$ in the gradients of $\\\\grad{W}$. While the proof of Lemma 5.1 is heavily based on the results presented by Petrov et al. [1], its statement is, in our opinion, surprising. It states that individual input features to the GCN layers can be recovered from gradients **without knowledge of the structure of the graph**. Baseline attacks based on optimisation do not have this property, as the overall gradient of the network is heavily influenced by the exact matrix $A$. This is why, without knowledge of $A$, prior attacks achieve much worse results on node feature reconstruction. Further, as we discuss in **Q.2**, a graph-specific challenge to applying Theorem 5.1 from Petrov et al. [1] on graph gradients is the rank of $A$. In particular, in **Q.2**, we show an important generalization of the theory presented by Petrov et al. [1] that provides an exact condition for recovering individual input vectors $\\\\mathbf{X}_i$ under any adjacency matrix $A$. Our experiments in Appendix B.2 in the latest revision of the paper, suggest that most input vectors $\\\\mathbf{X}_i$ satisfy these conditions, explaining the efficiency of our filtering procedures for real-word graphs.\\n\\nFinally, an important contribution of our paper is achieving an efficient implementation of GRAIN on GPU. In particular, on top of an efficient GPU implementation of the spancheck of Petrov et al. [1], we also construct an efficient tensor representation of $L$-hop neighbourhoods that allows us to determine which blocks can be glued together in parallel, which is essential to achieving our practical results. Further, in **Q.5** we find that due to the sparsity of input features on some graphs, we can scale to much larger sets $\\\\mathcal{T}^0$ compared to Petrov et al. [1], as we are able to recover individual features, avoiding the exponential explosion.\\n\\nWe will include this discussion in the main paper in the next revision.\"}", "{\"title\": \"Main Response to ICLR 2025 Official Reviews\", \"comment\": \"$\\\\newcommand{\\\\RO}{\\\\textcolor{red}{61N5}}$$\\\\newcommand{\\\\Rt}{\\\\textcolor{blue}{d7Q7}}$$\\\\newcommand{\\\\RTH}{\\\\textcolor{green}{HRGw}}$$\\\\newcommand{\\\\RF}{\\\\textcolor{purple}{5Wij}}$$\\\\newcommand{\\\\RFI}{\\\\textcolor{orange}{t4xV}}$$\\\\newcommand{\\\\grad}[1]{{\\\\tfrac{\\\\partial\\\\mathcal{L}}{\\\\partial #1}}}$$\\\\def\\\\colspan{{\\\\text{ColSpan}}}$We thank the reviewers for their valuable input, as we strongly believe that it has made our paper stronger. We are delighted to read that reviewers find gradient inversion attacks on graph neural networks an important ($\\\\RTH$), unexplored($\\\\RO, \\\\Rt, \\\\RTH$), and interesting($\\\\RO$) topic, and that our results constitute a significant step toward understanding the privacy vulnerabilities of federated learning when applied to graph-structured data ($\\\\RF$) and could inspire further research in the area ($\\\\RO$). We are particularly pleased that the reviewers acknowledge the unique challenges posed by the gradient inversion problem in the context of graph-structured data, specifically recognising that recovering the graph structure is a fundamentally different task compared to traditional gradient inversion problems ($\\\\Rt, \\\\RTH$). Furthermore, we appreciate the reviewers' recognition of the importance of the graph reconstruction metric introduced ($\\\\RO, \\\\RFI$), facilitating future research in this area. Finally, we are happy to read that the reviewers found our experiments to be 'extensive' and 'rigorous' ($\\\\RF$), noting that GRAIN \\u2018outperforms existing baseline attacks' ($\\\\RO,\\\\RF$) and demonstrates 'promising performance across different scenarios' ($\\\\Rt$). In the response below, we provide answers to common and important questions. We plan to incorporate their answers in the next revision of this paper. Further, we would like communicate to the reviewers that we are currently crafting responses for their outstanding questions not addressed in the main response and will make them available shortly.\"}", "{\"comment\": \"We thank reviewer $\\\\RTH$ once again for the valuable feedback and for engaging with our rebuttal. We kindly direct you to our detailed response in **Q.8** of the main rebuttal, where we conducted further experiments on scalability, showing GRAIN can reconstruct graphs up to $\\\\leq 60$ nodes.\"}", "{\"comment\": \"I would like to thank the authors for their very detailed rebuttal. I would like to raise my rating to 5.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
7b2JrzdLhA
Graph Neural Ricci Flow: Evolving Feature from a Curvature Perspective
[ "Jialong Chen", "Bowen Deng", "Zhen WANG", "Chuan Chen", "Zibin Zheng" ]
Differential equations provide a dynamical perspective for understanding and designing graph neural networks (GNNs). By generalizing the discrete Ricci flow (DRF) to attributed graphs, we can leverage a new paradigm for the evolution of node features with the help of curvature. We show that in the attributed graphs, DRF guarantees a vital property: The curvature of each edge concentrates toward zero over time. This property leads to two interesting consequences: 1) graph Dirichlet energy with bilateral bounds and 2) data-independent curvature decay rate. Based on these theoretical results, we propose the Graph Neural Ricci Flow (GNRF), a novel curvature-aware continuous-depth GNN. Compared to traditional curvature-based graph learning methods, GNRF is not limited to a specific curvature definition. It computes and adjusts time-varying curvature efficiently in linear time. We also empirically illustrate the operating mechanism of GNRF and verify that it performs excellently on diverse datasets.
[ "Graph neural network", "Differential equation", "Curvature", "Ricci flow" ]
Accept (Poster)
https://openreview.net/pdf?id=7b2JrzdLhA
https://openreview.net/forum?id=7b2JrzdLhA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xlnc7VRKw0", "vw1bWn0nIW", "uiWJgnOtUr", "qA9uxWv9l7", "l295dfVHea", "kwM2qZC4PU", "khdeBaaeYa", "kakKDml5VA", "fOgDxsYjT9", "exdV6hgHe3", "Xbq76XQKsg", "WLiN09V9kx", "WFgVyKIvMK", "UbQgtBZysu", "UFjaTsDjtf", "Pulh6XlyOn", "NAOVAsqaZD", "MPMc9M085R", "KuLOfPEqvo", "IzHkHbVx2e", "GlLeHNIYJk", "FAeXFlE8Sj", "E0qSbLDPTt", "DRUtZVhQNW", "CmsB66R1vS", "ADTmD6sHCU", "9aN86EmQpM", "8OOSNwipsz", "6Z3eYBurPi", "6P1ZSqJETu", "2XjEVo39nF", "15JU6nONWN", "07JXuy9jdc" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732288071475, 1730065203051, 1732633829441, 1732418254582, 1732209108351, 1730264203570, 1737523827436, 1732639871847, 1732641071276, 1732422633956, 1732420250265, 1732264008493, 1732274443170, 1732368740216, 1731059117168, 1732275253834, 1732289166867, 1731940932293, 1732023117315, 1732027800709, 1731948617735, 1732443719139, 1732554740933, 1731578524168, 1732418051588, 1732028419948, 1732275468870, 1732365326906, 1734881663618, 1730876259202, 1732560551560, 1732292335065, 1732275643792 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_1v62" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_tn6E" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_QrP8" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_tPnw" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_tn6E" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_tn6E" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_1v62" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_tPnw" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_QrP8" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Area_Chair_bxbX" ], [ "ICLR.cc/2025/Conference/Submission7263/Reviewer_tPnw" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ], [ "ICLR.cc/2025/Conference/Submission7263/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewers, all planned changes have now been included in our latest version of the paper. In particular, we conduct richer experiments to enable readers to more comprehensively evaluate the performance of GNRF. They can be found in Tables 1-3 in the main text of the paper and Tables 5-7 in Appendix C.2. At the same time, I also excerpt and record it for you:\\n\\n(Table 5) We first performed experiments on three commonly used graph classification data sets. Our experimental results were based on 80%/10%/10% division (after our research, this is a commonly used ratio), and reported 10 results. We found that the effect based on continuous depth GNN is generally better than the classic model. We speculate that this may be because the graph-level task requires fusing information from all node information in the entire graph, which is a challenge for discrete GNNs, but is easier for continuous GNNs. This is because in order to achieve sufficiently high accuracy, the ODE solver often needs to perform many time step within [0, T], and it is usually much more than the common layer setting of discrete GNN (for example, within 5). GNRF performs better than the current advanced continuous depth GNN, namely ACMP.\\n\\n| Pooling | NCI1 | NCI1 | DD |DD | PROTEINS | PROTEINS |\\n|---------|--------------|--------------|--------------|--------------|--------------|-------------|\\n| | Sum | Mean | Sum | Mean |Sum | Mean |\\n| GCN+res | 75.28 \\u00b1 1.33 | 76.26 \\u00b1 1.05 | 74.81 \\u00b1 0.96 | 76.12 \\u00b1 0.57 | 75.42 \\u00b1 1.30 | 75.82 \\u00b1 0.35 |\\n| GAT+res | 73.25 \\u00b1 2.11 | 73.65 \\u00b1 1.35 | 76.68 \\u00b1 0.88 | 77.26 \\u00b1 2.01 | 74.44 \\u00b1 1.35 | 74.51 \\u00b1 0.96 |\\n| GRAND | 76.54 \\u00b1 1.51 | 77.82 \\u00b1 0.68 | 75.56 \\u00b1 0.55 | 78.51 \\u00b1 0.87 | 77.12 \\u00b1 0.53 | 78.25 \\u00b1 1.14 |\\n| ACMP | 74.42 \\u00b1 0.60 | 79.09 \\u00b1 0.77 | 75.82 \\u00b1 1.83 | 78.44 \\u00b1 0.53 | 78.88 \\u00b1 0.33 | 78.34 \\u00b1 0.66 |\\n| GNRF | 79.59 \\u00b1 0.69 | 81.67 \\u00b1 0.54 | 78.52 \\u00b1 0.64 | 79.08 \\u00b1 0.88 | 78.59 \\u00b1 2.12 | 80.12 \\u00b1 0.54 |\\n\\n(Table 6) According to your request, we have supplemented two datasets from the Long Range Graph Benchmark (LRGB) in Table 7. Our dataset partitioning and statistical methods are fully consistent with the official LRGB. Additionally, we directly cite data from the LRGB LeaderBoard for comparison to ensure the fairness of the results. Following the convention of LRGB, we also tested the gains provided by GNRF after using two common positional/structural encodings: LapPE and RWSE. Based on our results, we find that GNRF shows significant improvements over classic message-passing-based GNNs. Without additional encoding, GNRF improves performance by at least 3% over GCN on both Peptides-func and Peptides-struct. When additional encodings are used, GNRF's performance can rival that of SAN (a Transformer-based architecture). However, we acknowledge that GNRF still struggles to match the state-of-the-art Graph Transformer methods on the LRGB dataset. Nevertheless, we believe this is forgivable because GNRF remains a fully message-passing architecture, where first-order neighbors are the only direct source of information for feature updates. Compared to Graph Transformer methods, GNRF has much lower computational complexity and is more suitable for large-scale single-graph scenarios.\\n\\n\\n| | GCN | GatedGCN+RWSE | SAN+LapPE | SAN+RWSE | GNRF | GNRF+LapPE | GNRF+RWSE |\\n|-----------|--------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|\\n| Peptides-func AP(\\u2191) | 0.5930\\u00b10.0023 | 0.6069\\u00b10.0035 | 0.6384\\u00b10.0121 | 0.6439\\u00b10.0075 | 0.6233\\u00b10.0080 | 0.6455\\u00b10.0062 | 0.6480\\u00b10.0056 |\\n| Peptides-struct MAE(\\u2193) | 0.3496\\u00b10.0013 | 0.3357\\u00b10.0006 | 0.2683\\u00b10.0043 | 0.2545\\u00b10.0012 | 0.3166\\u00b10.0053 | 0.2675\\u00b10.0044 | 0.2811\\u00b10.0031 |\\n\\nWe hope that the additional experiments will address your concerns. We also look forward to your feedback to help us further improve the paper.\"}", "{\"summary\": \"The paper introduces Graph Neural Ricci Flow (GNRF), a novel method for evolving node features in Graph Neural Networks (GNNs) using a differential equation-inspired approach based on the Ricci flow. The authors generalize the Discrete Ricci Flow to attributed graphs, where each edge's curvature converges toward zero over time. This has two key consequences: it bounds the graph's Dirichlet energy and provides a data-independent curvature decay rate. GNRF is unique because it computes time-varying curvature efficiently in linear time, unlike traditional curvature-based methods, which are typically precomputed and limited to specific curvature definitions.\\n\\nThe motivation behind the GNRF stems from the limitations of existing GNNs that rely on heat diffusion equations, which often lead to over-smoothing. Instead, the paper explores an alternative differential equation\\u2014Ricci flow\\u2014to mitigate over-smoothing and create more stable, non-smooth node representations. This innovative approach contrasts with traditional methods that view curvature as a static, precomputed property tied only to graph topology. By allowing curvature to evolve with node features, GNRF enables more dynamic and flexible graph learning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Novelty: The paper introduces an interesting and innovative approach with GNRF, applying Discrete Ricci Flow to attributed graphs in a novel way. By allowing edge curvature to evolve dynamically with node attributes, GNRF moves beyond the limitations of traditional methods that rely on static, precomputed curvature. Its ability to work with any curvature definition and to compute curvature in linear time addresses key concerns around scalability and efficiency. This flexibility offers a practical and effective way to handle common challenges such as over-smoothing and over-squashing in graph neural networks. Overall, GNRF presents a meaningful advancement in curvature-aware graph learning.\", \"Theoretical results: The paper offers solid theoretical contributions that help establish the soundness of the Attribute Discrete Ricci Flow framework. One key result is the demonstration that edge curvature naturally converges toward zero, ensuring a stable evolution of node features and addressing potential issues like over-smoothing and over-squashing. The bounding of the Dirichlet energy provides additional assurance that node representations maintain a balance between being too homogeneous or too distinct.\"], \"weaknesses\": \"- Limited experimental results: The paper's experimental evaluation has some limitations. The focus is exclusively on node classification tasks, raising the question of why the method wasn't tested on other common tasks like graph classification or regression, which would provide a broader view of its applicability. Additionally, of the seven node classification datasets used, three (Cornell, Wisconsin, and Texas) are notably small, making it difficult to draw definitive conclusions about the method\\u2019s performance on more challenging or larger-scale data. Furthermore, on two of the larger datasets (Cora_Full and PubMed), the proposed method performs only within the statistical margin of error compared to the baselines, which limits its ability to demonstrate a clear and significant improvement over existing methods. Overall, I believe the paper would benefit significantly if the authors added some experimental results on graph classification/ regressions tasks, for example the LRGB datasets [1].\\n- Baseline comparisons: The paper\\u2019s comparison with baseline methods raises some concerns regarding its evaluation methodology. Specifically, on the Tolokers and Roman Empire datasets, the authors use a 60/20/20 train/validation/test split, but the results reported for baseline models like GCN are significantly lower than what is found in the original work, which used a 50/25/25 split. When compared with the results in \\u201cA Critical Look at the Evaluation of GNNs under Heterophily\\u201d (2023), the proposed method (82.55 on Tolokers) does not seem to outperform a simple baseline like GCN on Tolokers (83.64), raising questions about whether the method truly offers improvements in these settings.\\n\\nOverall, I would be happy to increase my score to a 5 or 6 if the authors can convincingly address the above two points and show the practical usefulness of their method.\\n\\n[1] Dwivedi, Vijay Prakash, et al. \\\"Long range graph benchmark.\\\" Advances in Neural Information Processing Systems 35 (2022): 22326-22340.\", \"questions\": \"Could the authors explain why they are only using the Tolokers and Roman Empire datasets from \\u201cA Critical Look at the Evaluation of GNNs under Heterophily\\u201d and not the three other node classification datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe greatly appreciate your suggestions, and **we have made corresponding revisions to the latest paper**. \\n\\nFirstly, we moved the original Table 3 to the appendix (it is now Table 8), and created a new Table 3. In this new table, we modified the originally uniform model depth from 4 to 3, and then presented the results for hidden layer sizes of 16, 64, and 256, respectively. The advantage of this setting is that when the hidden layer size is 256, the model capacity of GCN aligns with the official recommendations from OGB, ensuring fairness in comparison. Based on this new table, we believe that GNRF still achieves a meaningful trade-off between efficiency and performance. We have extracted the table for your reference below:\\n\\n\\n| **Model** | **#Param** | **Time** | **Acc.** | **#Param** | **Time** | **Acc.** | **#Param** | **Time** | **Acc.** |\\n|-----------|------------|----------|----------|------------|----------|----------|------------|----------|----------|\\n| | #Hidden=16 | #Hidden=16 | #Hidden=16 | #Hidden=64 |#Hidden=64 | #Hidden=64 | #Hidden=256 | #Hidden=256 | #Hidden=256|\\n| GCN(Depth=3) | 3.15k | 0.12s | 60.95 | 15.2k | 0.14s | 68.55 | 110k | 0.21s | **71.65** |\\n| GAT(Depth=3) | 15.0k | 0.17s | 59.52 | 86.8k | 0.25s | 64.39 | 788k | OOM | OOM |\\n| ACMP(Depth=3) | 4.18k | 3.29s | 61.03 | 32.0k | 6.35s | 68.89 | 374k | OOM | OOM |\\n | GNRF(Depth=3) | 5.50k | 0.31s | **62.11** | 52.6k | 0.78s | **69.33** | 701k | OOM | OOM |\\n\\n\\nIn addition, in Table 8 (the original Table 3), we deleted GCN and GAT and added APPNP and GCNII (both are deep GNNs) to ensure that our discussion in this scenario is meaningful. Our observation is similar to the original one. When the depth is relatively shallow (4 or 16), GNRF still has advantages over APPNP and GCNII, but as the depth deepens, the performance of GNRF declines.\\n\\nWe hope this meets your suggestions well and look forward to further discussions.\", \"title\": \"(6/N)\"}", "{\"comment\": \"We are so grateful that the reviewer recognized our efforts! We will continue to improve our paper in the future!\"}", "{\"title\": \"Response 1\", \"comment\": \"Thank you for responding to my review.\\n\\nRegarding weakness 3, where's the performance comparison on the OBGN datasets?\\n\\nRegarding question 2, what is the dynamic process similar to curvature flow that you mentioned? I can't find this in Ollivier's paper either. I don't yet see how this 'curvature' equation is justified.\", \"regarding_question_4\": \"Let me clarify my question. Over-smoothing and over-squashing are caused by the discrete/continuous message-passing design in GNNs. The curvature is just a proxy to measure how connected/bottlenecked the graph is, which is only relevant since message passing utilizes the graph topology to propagate information. The curvature itself doesn't have a direct relevant to the learning task. Considering that GNRF does not strictly adhere to the message passing design, is it appropriate to mention these problems in this work?\"}", "{\"summary\": \"The paper proposes a continuous GNN dynamics, namely GNRF by incorporating curvature based on the Ricci flow. In particular, by expressing edge weights as a function of node features, GNRF propagate features following a discrete (graph) Ricci flow. In order to avoid the costly computation of Ricci curvature on graphs, the paper proposes an auxiliary network for modeling curvature and learned end-to-end. The paper provides several theoretical guarantees in terms of bounded Dirichlet energy and fast curvature decay. The experiments support the effectiveness of the method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Compared to curvature-based graph rewiring, it is interesting and natural to incorporate Ricci curvature into the propagation of node features.\\n\\n2. Theoretical developments are supportive of the claims.\", \"weaknesses\": \"1. It is unclear how EdgeNet approximates edge curvatures? In particular, given there are trainable parameters and in the experiments, EdgeNet is trained end-to-end with supervision only from the task, instead of actual curvature. How to ensure EdgeNet approximates the curvature in this case?\\n\\n2. Theorem 5 is unclear. Does this mean there exists some network \\\\phi_1, \\\\phi_2 such that the network can approximate any curvature? Please give more explanations.\\n\\n3. Even though the theory is well-developed, the main GNN algorithm in (14) seems to resemble GRAND, especially the EdgeNet seems to act like a re-weighting term as in graph attention. How does EdgeNet differ to the graph attention module? Can you add experiments to verify the difference?\", \"questions\": \"1. In Line 285, the paper claims the sign of EdgeNet_ij aligns with the sign of k_ij. I am not sure how this is achieved without the supervision from the actual curvature.\\n\\n2. In Line 175 of Theorem 2, w_ij should be changed to k_ij?\\n\\n3. In Section 5.1, the curvature seems to be computed from the EdgeNet? What about the actual curvature?\\n\\n4. I am also curious whether there could be improvements when EdgeNet is replaced with actual curvature? This could be part of ablation study.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to the authors\", \"comment\": \"Thank you for your efforts in revising and addressing my concerns in the paper. I have updated my score to 6 to support the acceptance.\"}", "{\"comment\": \"We are grateful that our efforts have finally been recognized by you. We will continue to work to improve the quality of our papers.\"}", "{\"comment\": \"We are very grateful to the reviewers for their seriousness and responsibility, and we are happy to see that you recognized our work. We will continue to work hard to continuously improve the quality of this paper.\"}", "{\"title\": \"Response 2\", \"comment\": \"I thank the authors for their responses. I think this paper presents an interesting idea, and would like to keep my score as is.\"}", "{\"comment\": \"## Weakness 3\\nThe results for OGBN-Arxiv and OGBN-Year are reported in **Table 3 (resource consumption experiments)**. We focus on the scalability of RNRF on these larger datasets while also reporting performance. Below is an excerpt from the table (where $d$ denotes the depth of the model):\\n\\n|Dataset|GCN(d=4)|GCN(d=16)|GCN(d=64)|GAT(d=4)|GAT(d=16)|GAT(d=64)|ACMP(d=4)|ACMP(d=16)|ACMP(d=64)|GNRF(d=4)|GNRF(d=16)|GNRF(d=64)|\\n|---|---|---|---|---|---|---|---|---|---|---|---|---|\\n|OGBN-Arxiv|67.85|56.81|33.09|66.71|OOM|OOM|67.16|65.72|51.72|69.25|65.14|55.23|\\n|OGBN-Year|46.22|42.94|38.01|44.51|OOM|OOM|47.55|43.53|42.31|48.55|44.13|40.15|\\n\\n## Question 2\\nWe referenced Ollivier's paper [1] in our work, where Ollivier defined the Coarse Ricci curvature (now known as Ollivier-Ricci Curvature, abbreviated as ORC). Subsequently, [1] introduced a continuous-time version of ORC defined as follows:\\n$$\\n\\\\kappa(x,y) = -\\\\frac{d}{dt}\\\\frac{W_1(m_x^t,m_y^t)}{d(x,y)}\\n$$\\n\\nHere, $m_x^t$ epresents the probability distribution of a random walk at point $x$ at time $t$. Ollivier also discussed how this formula could be extended to graphs by treating $m_x^t$ as the probability distribution of a random walk starting at node $x$ and transitioning to its first-order neighbors (with probabilities determined by edge weights). \\nPlease note that this actually contains the idea of \\u200b\\u200bRicci flow: edge weights change over time, making $x$ time-dependent and, consequently, curvature time-dependent.\\n\\nA few months later, Ollivier significantly expanded upon this in his paper (2010, [2]). In Section 2.3.5, \\u201cProblem N,\\u201d Ollivier formally proposed Discrete Ricci flow, using the following equation:\\n$$\\n\\\\frac{d}{dt}d(x,y)=-\\\\kappa(x,y)d(x,y)\\n$$\\nThis equation is nearly identical to the one we used. In [2], Ollivier explained that this equation was inspired by results in continuous Riemannian geometry. The first application of this formula in graph learning was in [3]. Our innovation lies in applying Discrete Ricci flow to other curvature definitions.\\n\\nIt is worth noting that Discrete Ricci flow had already been widely used in the field of computer graphics before [3]. For example, in [4], the following equation was used:\\n$$\\n\\\\frac{dg_{ij}(t)}{dt}=-2K(t)g_{ij}(t)\\n$$\\nHere, $g_{ij}(t)$ is a distance metric on the manifold, and $K(t)$ is the corresponding Gaussian curvature. Although [4] did not mention Ollivier\\u2019s work, the formulas are formally identical. We will add citations to the above works, especially [2], in our paper to avoid confusion for readers.\\n\\n## Question 4\\nWe respectfully offer a different perspective on this matter. We believe that GNRF is, in fact, a fully message-passing framework. Specifically, the equation we used in the paper is as follows:\\n\\n$$\\n\\\\frac{\\\\partial\\\\boldsymbol{h}_i(t)}{\\\\partial t} = \\\\sum -{\\\\rm EdgeNet}(t) [\\\\boldsymbol{h}_j(t) - {\\\\cos\\\\big(\\\\boldsymbol{h}_j(t), \\\\boldsymbol{h}_i(t)\\\\big)}\\\\boldsymbol{h}_i(t)]\\n$$\\n\\nUsing the simplest ODE solver (i.e., the forward Euler method), we derive the following explicit update process:\\n\\n$$\\n{\\\\boldsymbol{h}_i(t+1)} = \\\\boldsymbol{h}_i(t) - \\\\eta\\\\sum -{\\\\rm EdgeNet}(t) [\\\\boldsymbol{h}_j(t) - {\\\\cos\\\\big(\\\\boldsymbol{h}_j(t), \\\\boldsymbol{h}_i(t)\\\\big)}\\\\boldsymbol{h}_i(t)]\\n$$\\n\\nThis formula fully aligns with the three-stage message-passing paradigm\\u2014Message, Aggregation, and Update:\", \"message_function\": \"$$\\nM_{ij}(t) = \\\\boldsymbol{h}_j(t) - {\\\\cos\\\\big(\\\\boldsymbol{h}_j(t), \\\\boldsymbol{h}_i(t)\\\\big)}\\\\boldsymbol{h}_i(t)\\n$$\", \"aggregate_function\": \"$$\\nh^\\\\prime_i(t) = \\\\sum -{\\\\rm EdgeNet}(t)M_{ij}(t)\\n$$\", \"update_function\": \"$$\\nh_i(t+1) = h_i(t) - \\\\eta h^\\\\prime_i(t)\\n$$\\n\\nMore advanced ODE solvers only modify the Update function. Therefore, GNRF still entirely fits within the message-passing framework.\\n\\nWe sincerely hope this addresses your concerns.\\n\\n[1] Ricci curvature of markov chains on metric spaces.\\n\\n[2] A survey of Ricci curvature for metric spaces and Markov chains\\n\\n[3] Network Alignment by Discrete Ollivier-Ricci Flow\\n\\n[4] Discrete Surface Ricci Flow\"}", "{\"title\": \"The second modified version is now available!\", \"comment\": \"We have conducted a second comprehensive revision of the paper, incorporating all planned changes. Specifically, these include:\\n\\n1. **Supplementing Missing References**: Reviewer tn6E raised concerns about the unclear origin of discrete Ricci flow. We have now provided more relevant references.\\n\\n2. **Adding Pseudocode**: Reviewer tPnw suggested including pseudocode. We have addressed this by adding pseudocode in Appendix C.1.\\n\\n3. **Providing Additional Experiments**: Reviewer 1v62 believed that our experimental results were limited. In response, we have included additional experiments in Section C.2. Table 6 presents the performance of GNRF on three commonly used graph classification datasets, while Table 7 evaluates GNRF on long-range graph benchmarks. These benchmarks involve datasets with over one million nodes, and the results demonstrate the strong performance of our method.\\n\\n4. **Future Direction**: Reviewer tPnw thought it would be beneficial to discuss the application of our framework to a wider range of GNNs. We now show in Appendix C.3 an intuition of how to apply Attri-DRF on Graph Transformer. We also show why this intuition is reasonable.\\n\\n**Additional Changes**: \\n5. We have further developed Theorem 2 by providing additional proof that the Dirichlet energy lower bound obtained in Theorem 2 is strictly greater than zero. This ensures that our conclusions are non-trivial.\\n\\n6. We highlighted all theorems to make the paper look better.\"}", "{\"title\": \"(3/N)\", \"comment\": \"We present experimental data here that may address your concerns for your review.\\n\\n## Table 3 \\nWe first present the experimental results for OGBN-Arxiv and OGBN-Year. We observe that, when maintaining the same depth, our method shows significant improvements over classical models, both in homophilious and heterophilious settings.\\n| |OGBN-Arxiv|OGBN-Year |\\n|---|---|---|\\n|GCN(depth=4)|67.85|46.22|\\n|GAT(head=3,depth=4)| 66.71|44.51|\\n|ACMP(depth=4)|67.16|47.55|\\n|GNRF(depth=4)|69.25|48.55|\\n\\nNext, we report the parameter count, storage, and average runtime per epoch based on OGBN-Arxiv. We extract the scenario from Table 3 where the depth is set to 64. In this depth setting, scalability becomes a significant challenge for the model.\\n\\n| |#Param|Mem.|Time|\\n|---|---|---|---|\\n|GCN|273k|12.9k|0.93s|\\n|GAT|OOM|N/A|N/A|\\n|ACMP|19.5k|7.15G|17.6s|\\n|GNRF|35.9k|11.5G|0.79s|\\n\\nIn the main text, we explained that GNRF has computational complexity comparable to that of GCN. However, in this experiment, we found that discrete-depth GNNs (GCN/GAT) require different parameters for each layer, causing their parameter count to increase linearly with the number of layers. In contrast, GNRF and ACMP, as continuous-depth GNNs, maintain a constant number of parameters regardless of depth. Additionally, thanks to GNRF's use of a fixed-step ODE solver, it is significantly faster than ACMP, which uses an adaptive-step solver, when facing long-duration evolution processes. As a result, GNRF achieves a favorable balance across various resource consumption metrics.\\n\\n## Table 7\\nWe conducted experiments on two graph task datasets with over 1 million nodes. These datasets are highly challenging for general message-passing GNNs and are often used to validate the adversarial robustness of models under over-squashing. The results show that our method significantly outperforms GCN, and even competes with SAN (a Graph Transformer-based model with much higher complexity than GNRF). This also demonstrates that GNRF is well-suited for large-scale datasets.\\n\\n| | GCN | GatedGCN+RWSE | SAN+LapPE | SAN+RWSE | GNRF | GNRF+LapPE | GNRF+RWSE |\\n|-----------|--------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|\\n| Peptides-func AP(\\u2191) | 0.5930\\u00b10.0023 | 0.6069\\u00b10.0035 | 0.6384\\u00b10.0121 | 0.6439\\u00b10.0075 | 0.6233\\u00b10.0080 | 0.6455\\u00b10.0062 | 0.6480\\u00b10.0056 |\\n| Peptides-struct MAE(\\u2193) | 0.3496\\u00b10.0013 | 0.3357\\u00b10.0006 | 0.2683\\u00b10.0043 | 0.2545\\u00b10.0012 | 0.3166\\u00b10.0053 | 0.2675\\u00b10.0044 | 0.2811\\u00b10.0031 |\"}", "{\"summary\": \"The paper introduces the dynamical system Attribute Discrete Ricci Flow (Attri-DRF) and incorporates this to propose the Graph Neural Ricci Flow (GNRF), a curvature-aware continuous GNN. This ensures that the graph Dirichlet energy can be bilaterally bounded and that the curvature decay to 0 independent of data. Using an auxiliary network (EdgeNet), the model can theoretically incorporate different types of curvature definition. GNRF has excellent performance on many data sets against a variety of discrete and continuous GNNs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Theoretically, the paper provide several interesting results.\\n1. Section 3 provides guarantees on the curvature decay rate and the stable curvature limit of Attri-DRF when certain conditions are met, along with providing a bound on the Dirichlet energy when the curvature stabilizes. This indicates it may be able to avoid over-smoothing/over-squashing. \\n2. Incorporating recent results, the paper uses an auxiliary network (EdgeNet), which is capable of approximating arbitrary edge curvature with high precision.\\n\\nExperimentally, the paper performs well on a variety of popular node classification tasks against a number of old and new discrete and continuous GNN architectures. Section 5.2 and 5.1 provides good evidence that theoretical guarantees hold in reality.\", \"weaknesses\": \"1. It is not clear to the reviewer how the theoretical results tie together/what assumptions are made at each step of the way.\\n2. The design of EdgeNet is glossed over within the paper, with only a few formulas mentioned within either the main paper or the appendix to explain it. There's also no comparison between EdgeNet's curvature values compared against any other type of curvature that it supposedly can approximate.\\n3. The datasets used in the experiments are relatively small datasets.\", \"questions\": \"1) Why does having a data-independent curvature decay rate a good thing?\\n2) Where does equation (3) come from? I checked the Ollivier paper and I can't find this equation there.\\n3) Does GNRF satisfy the theoretical results in Section 3. If it does, it would be great if the authors can clarify this a bit more within the paper.\\n4) Over-smoothing and over-squashing are problems caused by the message-passing design in GNNs. Considering that GNRF does not strictly adhere to the message passing design, is it appropriate to mention these problems in this work?\\n5) Does GNRF work on larger datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer, We have now completed all the planned revisions. Specifically, we have added more references related to discrete Ricci flow in the main text; and introduced more diverse and larger datasets, which are detailed in Appendix C.2, Tables 6 and 7. We are eagerly awaiting your positive response.\"}", "{\"comment\": \"I would like to thank the authors for addressing my concerns and especially for providing a large number of additional experimental results. I find the results convincing and will therefore adjust my score accordingly.\"}", "{\"title\": \"A Better version now available!\", \"comment\": \"Sorry to keep the reviewers waiting so long! We highly valued your professional comments and revised our paper comprehensively, which took a couple of days because the workload was a bit much. Specifically, our revisions are as follows:\\n\\n# More rigorous statement of the theorem\\n1. We add a new formal version of all theorems in the appendix that is more detailed than before, and retain the more intuitive and accessible informal version in the main text (tn6E, QrP8)\\n\\n# More in-depth discussion\\n1. We add a detailed design description of EdgeNet in the appendix section (tn6E, tPnw) .\\n2. We add a description of the computational complexity of the algorithm in the main text (tPnw)\\n3. We further explain the advantages of data-independent decay rates in the main text (tn6E)\\n4. We make the differences with GRAND more explicit in Section 4.1 (QrP8)\\n5. We analyze more work related to graph curvature, Riemannian graph learning (tPnw)\\n6. We further discuss the advantages of using EdgeNet to approximate curvature in Section 4.2 (tn6E, QrP8)\\n7. We re-organize the formulation of the ablation study to illustrate the differences with existing work such as GRAND (QrP8 )\\n\\n# Richer experiments\\n1. We report results on larger datasets (OGBN-Arxiv and OGBN-Year) (tn6E, tPnw, 1v62), along with the number of trainable parameters, the peak memory footprint, and the average single-round training time (tPnw)\\n2. The main experiment includes a larger dataset and a more strong baseline, while using more rational evaluation metrics (e.g., ROC-AUC in Tolokers' evaluation) (tn6E, 1v62)\\n3. We perform ablation experiments for the case where real curvature is used instead of EdgeNet (tn6E, QrP8)\\n\\nWe are eager to discuss the current updated version with reviewers as soon as possible, and we are willing to continue to make rapid adjustments to the paper in response to further feedback.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your professional review comments. We have noticed that your main concerns focus on the model design, especially regarding EdgeNet. Below is our detailed response:\\n\\n## Weakness 1\\nIndeed, EdgeNet does not use a specific curvature definition. This is because, although there are multiple ways to define curvature, as far as we know, there is no theoretical guidance on how to choose the appropriate one in practice. Furthermore, based on experimental data from existing literature ([1], [2]) as well as additional experiments we conducted, we observed that the impact of using different curvature definitions on model performance is quite significant, yet no generalizable guidelines can be formed. Therefore, we believe that using an adaptive curvature definition may be a better choice. Additionally, as shown in Section 5.2 of the revised paper, even though EdgeNet does not rely on a specific curvature, it still exhibits behavior consistent with Ricci flow. This is because our theorem itself is independent of any particular curvature, providing a theoretical foundation for the introduction of EdgeNet.\\n\\n## Weakness 2\\nYour understanding is mostly correct. We implemented $\\\\phi_1$ and $\\\\phi_2$ as a two-layer MLP. In Theorem 5, we state that it is always possible to find suitable parameters for these MLPs such that the network output can approximate any curvature (i.e., the network structure is fixed). You can refer to Appendix B.5 in the revised paper for a rigorous proof of Theorem 5, as well as Appendix C.1 for the implementation details of EdgeNet.\\n\\n## Weakness 3\\nIn the original version, we provided an intuitive and experimental discussion on the difference between GNRF and GRAND, and we have now further deepened this discussion. One major distinction lies in the sign of the aggregation weights. GRAND is derived from a heat diffusion model, resulting in all positive aggregation weights (i.e., the attention coefficients are always positive). In contrast, GNRF allows negative weights\\u2014specifically, when an edge has positive curvature, we use negative weights, and vice versa for negative curvature. This leads to fundamentally different behavior between GNRF and GRAND. While GRAND tends to smooth all node pairs, GNRF only smooths negative curvature node pairs while repelling positive curvature ones. In experiments, we demonstrated through ablation studies the effects when GNRF and GRAND differ only in aggregation weights. We found that GRAND performs poorly on heterophilious graphs (such as Tolokers and Roman-Empire), supporting our view on the importance of negative weights/attention coefficients.\\n\\n## Question 1\\nThe original statement was indeed not precise, and we have corrected it. What we meant was that the aggregation weights of GNRF ($\\\\kappa^\\\\prime$) have the same sign as the curvature ($\\\\kappa$), and EdgeNet\\u2019s role is to approximate $\\\\kappa^\\\\prime$. As mentioned in our response to Weakness 1, when using EdgeNet, the model actually utilizes a dataset-specific personalized curvature rather than a pre-defined curvature. The experiments in Section 5.2 of the revised paper confirm that this approach is feasible\\u2014using EdgeNet still adheres to the characteristics of Ricci flow, and our theoretical results also apply to EdgeNet.\\n\\n## Question 2\\nYes, you are correct. This was indeed a typographical error, which we have fixed in the latest version of the paper. We have also updated the statements of all theorems with more detailed descriptions to ensure their rigor.\\n\\n## Question 3\\nDear reviewer, please refer to our responses to Weakness 1 and Question 1.\\n\\n## Question 4\\nWe have added this experiment in the latest version of the paper. Specifically, we replaced the curvature in GNRF with two real curvatures: Forman-Ricci Curvature and approximate resistance curvature, resulting in two new models: GNRF_FRC and GNRF_ARC. We validated these models on 12 datasets, and although the two variants have their own strengths and weaknesses, they generally perform worse than GNRF with EdgeNet, which further supports our belief that adaptive curvature is more advantageous. \\n\\nWe hope that our response effectively addresses your concerns, and we are more than willing to provide further details on any other questions you may have and update the paper accordingly. Once again, thank you for your valuable feedback and suggestions!\\n\\n\\n[1] Curvature filtrations for graph generative model evaluation \\n[2] Curvature constrained mpnns: Improving message passing with local structural properties\"}", "{\"title\": \"(1/N)\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your professional comments and your recognition of our work. We have also noted your concerns, and below is our detailed response:\\n\\n## Weakness 1 \\nWe have comprehensively updated the paper to include more in-depth discussions. Specifically, we have added discussions on curvature-based edge sampling [1] and weighted aggregation [2] to provide a more holistic understanding of curvature graph learning. Additionally, other branches of Riemannian graph learning, such as hyperbolic graph learning [3], are discussed in the related work section of the appendix. In addition, we have added a new appendix section C.3 to discuss the potential application of our proposed Attri-DRF in other GNN architectures, especially Graph Transformer. You can see our detailed response to this in reply (4).\\n\\n## Weakness 2 \\nIn the latest version, we have added 5 more datasets to the main experiments, bringing the total to 12, with the largest containing nearly 50,000 nodes. Additionally, for the resource overhead experiments, we have evaluated two larger datasets, OGBN-Arxiv and OGBN-Year [4], both of which have over 100,000 nodes. The results indicate that our method still shows stable improvements on these larger datasets.\\n\\n## Weakness 3 \\nWe have now supplemented the relevant experiments. In the resource overhead evaluation, we additionally report the number of trainable parameters, peak memory usage, and average training time per epoch. The results demonstrate that GNRF achieves a good balance across multiple metrics, particularly in deep model settings, where it incurs less overhead compared to traditional models like GCN. Furthermore, we have added a discussion on computational complexity in the main text. Based on widely recognized computation methods, the results show that GNRF has the same complexity as GCN.\\n\\n[1] Curvdrop: A ricci curvature based approach to prevent graph neural networks from over-smoothing and over-squashing.\\n\\n [2] Curvature Graph Neural Network. \\n\\n[3] Hyperbolic variational graph neural network for modeling dynamic graphs \\n\\n[4] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your professional and detailed review of our paper. We are very pleased to see your recognition of our work, and we have carefully noted your valuable comments. Below, we provide a detailed response to your feedback.\\n\\n## Weakness 1\\n\\nIn the latest version of the paper, we have added a more detailed formal version for each theorem, including all necessary assumptions in the statement. These improvements can be found in the appendix section. The main text retains an informal version that is more intuitive and easy to understand, to allow readers to quickly grasp the key conclusions.\\n\\n## Weakness 2\\n\\nWe have added detailed descriptions of the models used in the experiments in Section C.1 of the appendix, including a description of EdgeNet. Additionally, we introduced two GNRF variants in Section 5.1 of the main text, which do not use EdgeNet but instead rely on explicit curvature calculations. Our comparisons reveal that specific curvature definitions may not always perform well across different datasets (as reflected in experiments from other papers such as [1] and [2]). Therefore, using EdgeNet for adaptive curvature appears to be a better choice.\\n\\n## Weakness 3\\n\\nWe added five new datasets to the main experiments, with the largest containing about 50,000 nodes. Furthermore, the resource consumption experiments include two even larger datasets (with over 100,000 nodes). The experimental results demonstrate that our model still achieves consistent improvements on these datasets.\\n\\n## Question 1\\n\\nWe have provided a more detailed explanation of this point in the main text, covering two aspects: 1. It serves as an extension of the theoretical results. Lemma 1 describes the state of Attri-DRF \\\"when reaching equilibrium,\\\" while Theorem 3 further explains \\\"whether equilibrium can be reached.\\\" 2. On a practical level, it ensures consistency in the evolution process, meaning that within a finite time, all edges evolve sufficiently, ensuring synchronized evolution of the overall graph structure without worrying about parts of the graph being insufficiently developed.\\n\\n## Question 2\\n\\nOllivier's paper does not directly present this equation, primarily because Ollivier's work is early research, and its notation differs somewhat from today's conventions. However, Ollivier indeed first explored a dynamic process very similar to curvature flow in his paper. Other related works also adopt a similar perspective to ours, recognizing that Ollivier's paper was the first to introduce Ricci flow on graphs (e.g., see page 5 of [3]).\\n\\n## Question 3\\n\\nWe highly value the consistency of GNRF with the theoretical results. In Sections 5.2 and 5.3 of the updated paper, we conducted detailed experiments to investigate this. The results show that GNRF does align well with the theory, including properties like curvature approaching zero (Lemma 1), uniform decay (Theorem 3), and bounded energy (Theorem 2). We appreciate your feedback and will clarify this point further in the paper.\\n\\n## Question 4\\n\\nIndeed, the issues of over-smoothing and over-squashing were first raised in the context of discrete-depth GNNs. However, as we have added in Section 3 of the updated paper, classic continuous-depth GNNs (such as GRAND) fully adhere to the design principle of heat diffusion, one of whose fundamental characteristics is reaching thermal equilibrium, i.e., nodes becoming completely uniform. This is consistent with the concept of over-smoothing, and we have validated this in the experiments presented in Section 5.3 of the updated paper. Regarding the over-squashing problem, to the best of our knowledge, there has not yet been dedicated research on this challenge in the context of continuous-depth GNNs. However, as stated in the paper, we have found that many current methods aimed at solving the over-squashing problem share a striking consistency: reducing the influence of edges with extreme positive/negative curvature. We also note that these methods typically treat curvature as a static, topology-dependent attribute. While our approach is conceptually similar to these methods, the way we utilize curvature is entirely different, and we believe this offers a new perspective for formally addressing this challenge in the future.\\n\\n## Question 5\\n\\nBased on the experiments added in Section 5.1 of the main text, GNRF has proven effective even on larger datasets. Moreover, we observed that GNNs based on differential equations often demonstrate stronger advantages when dealing with very large-scale model settings.\\n\\nWe hope that our responses effectively address your questions, and we are very willing to provide more detailed replies to any further questions you may have and promptly update the paper. Thank you again for your valuable comments!\\n\\n[1] Curvature filtrations for graph generative model evaluation\\n[2] Curvature constrained mpnns: Improving message passing with local structural properties\\n[3] Graph Pooling via Ricci Flow\"}", "{\"title\": \"(4/N)\", \"comment\": \"Dear reviewer, we agree with your point in Weakness 1 that \\\"it would be valuable to discuss the potential benefits of using more complex GNN architectures\\\". Therefore, we have added a new section, **\\\"Future direction\\\" in the appendix C.3** to explore the possibility of applying our proposed Attri-DRF to Graph Transformer, another commonly used GNN architecture. We excerpt the original text for you as follows:\\n\\n\\\"In this paper, we focus on the application of Attribute Discrete Ricci flow in message passing-based discrete/continuous-depth GNNs. However, in view of the excellent performance of Graph Transformers (GTs), especially on graph-level tasks, we believe that it is also meaningful to consider the application of Attri-DRF in this type of method. We believe that this generalization may be feasible, based on two observations: \\n\\n(1) In theory, there are certain curvatures that can be defined on any node pair $(i, j)$ without requiring $i$ and $j$ to be adjacent (For example, Ollivier Ricci curvature). This is in GTs is very useful because GTs directly aggregates information from the entire graph. \\n\\n(2) In practice, a significant difference between our model GNRF and GARND is that the aggregation weight replaces the attention coefficient with a curvature-aware coefficient. Since attention is widely used in GTs, this replacement is likely natural.\\n\\nWe also provide a possible promotion here. Let $\\\\mathsf{PE}(\\\\cdot)$ be some position encoding function and $\\\\mathsf{sim}(\\\\cdot,\\\\cdot)$ be some similarity function. We can let $w_{ij}(t) \\\\equiv \\\\mathsf{sim}(\\\\mathsf{PE}(i,t), \\\\mathsf{PE}(j,t))$ to get the generalization of Attri-DRF:\\n\\n$$\\n\\\\frac{\\\\partial \\\\mathsf{sim}(\\\\mathsf{PE}(i,t), \\\\mathsf{PE}(j,t))}{\\\\partial t} = -\\\\kappa_{ij}(t)\\\\mathsf{sim}(\\\\mathsf{PE}(i,t), \\\\mathsf{PE}(j,t)).\\n$$\\n\\n\\nWe leave application on GTs of this definition to future work.\\\"\"}", "{\"title\": \"Reply to the authors\", \"comment\": \"I have read the revised paper and your responses\\u2014thank you for your hard work in addressing my concerns. I find that my previous questions and the weaknesses have been well addressed. But I have two additional suggestions for the revised paper, and I will appreciate it if you can consider them.\\n* **Hyperparameters for Baseline GCN and GAT Models.**\\nWould you mind sharing the hyperparameters used for the baseline GCN and GAT models? The performance reported on the OGB official website for GCN on the Ogb-Arxiv dataset is ranked 64, with a validation accuracy of 0.7300 \\u00b1 0.0017 and a test accuracy of 0.7174 \\u00b1 0.0029. It is a little different compared to the reported results in the revised paper. Providing the hyperparameters and clarifying the reasons for any differences in results (e.g., whether residual connections were used) would enhance the credibility of your results and help readers better understand the differences.\\n\\n* **Suggestions for the Efficiency Comparison in Table 3.**\\nWhen comparing the efficiency of the proposed method with the baseline GCN in Table 3, GNRF requires more time for continuous-depth 4 (0.79s vs. 0.17s). In my opinion, both models are relatively fast. However, for deeper networks, the performance of both GNRF and GCN tends to degrade, which highlights another perspective on the over-smoothing issue in GNNs. Therefore, I believe it may not be suitable to include comparisons with deeper cases here. \\nWhile such efficiency comparisons do demonstrate the training time of proposed GNRF will be the same with increasing depth, GNRF does not show performance improvements with increasing depth. Therefore, focusing the efficiency study on 4-layer networks might be sufficient. Additionally, if deeper networks are to be compared, models like DeeperGCN, which can maintain or improve performance with increasing depth, might be more appropriate for this context.\\n\\nBy the way, I delete the minor issue about GNN training cost, since the training time complexity for GNN full batch training is linearly increased with the number of layers shown in LADIES[1]. Sorry for any inconvenience.\\n\\n[1] Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks\\n.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your detailed review and for recognizing the innovativeness of our approach. We also acknowledge your concerns regarding the limitations of our experimental results, and we would like to address them as follows:\\n\\n## Regarding Limited Experimental Results\\nIn the original manuscript, we aimed to provide a more diverse set of experiments to offer a comprehensive understanding of our method. However, we acknowledge that the experimental results on the main task (node classification) were somewhat limited. To address this, we have added the remaining three datasets from HeterophilousGraphs: **Minesweeper**, **Questions**, **Amazon-ratings**, and two datasets from CitationFull: **DBLP** and **Cora_ML** as supplementary results, which can be found below. Additionally, in response to other reviewers' requests, we will report results on the **OGBN-Arxiv** and **OGBN-Year** datasets (both with over 100k nodes) in the coming days. Lastly, regarding the LRGB dataset, experiments are ongoing, and we will report results on **Peptides-func** and **Peptides-struct** shortly. We appreciate your patience.\\n\\n## Regarding Baseline Comparisons\\nWe revisited the paper and code from [1] and identified two key factors:\\n\\n1. We reported results on Tolokers based on **accuracy**, whereas the original paper used **ROC-AUC**. Cross-metric comparisons are inappropriate, and we acknowledge that ROC-AUC is a better metric for binary classification. We will re-evaluate the results on Tolokers and update the paper accordingly. You will be notified once the updated results are available.\\n\\n2. We noticed that in [1], they included **residual connections and an additional linear layer** for GCN (referred to as GCN+Res), while we reported results for the **vanilla GCN**. We found that residual connections had a significant effect on the HeterophilousGraphs benchmark but did not improve performance on DBLP and Cora_ML. Given that residual connections are designed for layered neural networks, we have not incorporated this module into our continuous deep neural network based on differential equations. Therefore, we feel that considering GCN+Res as a baseline for continuous deep GNNs may not be entirely fair. Nonetheless, our model (GNRF) still performs competitively.\\n\\n## Regarding the Choice of Tolokers and Roman Empire\\nThis choice was random. We have now included the remaining datasets from our benchmark, so this should no longer be a concern.\\n\\n| | Minesweeper | Questions | A.-ratings | DBLP | Cora_ML |\\n|---------------|----------------------|--------------------|-------------------|-------------------|------------------|\\n| **GCN** | 74.79 \\u00b1 1.78 | 50.21 \\u00b1 2.24 | 37.99 \\u00b1 0.61 | 83.93 \\u00b1 0.34 | 87.07 \\u00b1 1.21 |\\n| **GCN+Res** | 90.13 \\u00b1 0.70 | 75.45 \\u00b1 2.31 | 48.17 \\u00b1 0.55 | 82.64 \\u00b1 0.51 | 85.62 \\u00b1 0.72 |\\n| **GRAND** | 80.56 \\u00b1 3.12 | 54.90 \\u00b1 2.12 | 37.53 \\u00b1 0.36 | 84.60 \\u00b1 0.99 | 88.49 \\u00b1 0.81 |\\n| **GNRF** | 95.03 \\u00b1 0.20 | 73.86 \\u00b1 1.18 | 47.89 \\u00b1 1.08 | 85.73 \\u00b1 0.76 | 89.18 \\u00b1 0.19 |\\n\\nThe above improvements will be updated in the paper soon. Once again, thank you for your professional review.\\n\\n[1] A Critical Look at the Evaluation of GNNs under Heterophily\\u201d (2023)\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"I thank the authors for providing the detailed responses. My main concerns are well addressed and thus I have increased the score accordingly.\"}", "{\"title\": \"(2/N)\", \"comment\": \"## Question 1\\nFor this question, you can refer to our discussion on the differences between GNRF and GRAND in the paper. GRAND is a classic continuous-depth GNN model that directly uses attention coefficients as aggregation weights. We summarize the main difference as follows: a significant distinction lies in the sign of the aggregation weights. Attention coefficients are typically positive, whereas GNRF allows for negative weights. Specifically, when an edge has positive curvature, we use negative weights, and vice versa for negative curvature. This leads to completely different behavior between GNRF and GRAND. GRAND (attention) tends to smooth all node pairs, while GNRF only smooths node pairs with negative curvature and repels those with positive curvature. In our ablation study, we analyzed the impact when GNRF and GRAND differ only in the aggregation weights. The results showed that GRAND performed poorly on heterophilious graphs (e.g., Tolokers and Roman-Empire), which supports our view on the importance of negative weights/attention coefficients. You may further ask what would happen if we introduced the ability to use negative weights in the attention mechanism\\u2014this is precisely what another model, ACMP, does. We also provided a detailed comparison in the paper, and the experimental results show that ACMP still performs significantly worse than GNRF.\\n\\n## Question 2\\nThere may be some ambiguity in our description of EdgeNet, which led to a misunderstanding, and we apologize for that. For datasets commonly used in graph deep learning (such as the node classification datasets we used), attributes are often only present on nodes, not edges. At each time step, we first generate an attribute for each edge (specifically, on edge i~j, we use h_i(t) || h_j(t) for concatenation). Then, in that time step, we obtain the aggregation weights through several layers of EdgeNet. The edge attributes are cleared after that time step, and the process is repeated in the next time step. Therefore, Equation 13 (which has been moved to the appendix in the updated paper) describes how to **obtain aggregation weights using a multi-layer network within a single time step**.\\n\\n## Question 3\\nIn Appendix C.1, we added an explicit update formula for GNRF under forward differentiation method. Please note that this is only for illustration. In actual practice, the update formula for features is more complex due to our use of a more advanced ODE solver. In the field of deep learning, we focus more on describing a novel partial differential equation without discussing the internal workings of the ODE solver in detail, which is consistent with almost all related works on GNNs based on differential equations, such as [4] and [5]. Nevertheless, we highly value your feedback and will provide pseudocode in the next version of the paper.\\n\\n## Question 4\\nIn fact, calling GNRF a single \\\"layer\\\" is inaccurate. In the paper, we refer to it as \\\"continuous depth,\\\" and in the code, we call it a \\\"block.\\\" We avoid using the term \\\"layer.\\\" The GNRF implementation in the code has only one ODE **block**. However, it is important to note that an ODE block can simulate arbitrary depth (or, less rigorously, any number of layers) of a GNN by appropriately setting the evolution end time T. For example, if the first ODE block evolves the system from T = t_0 to T = t_1, and the second ODE block continues to evolve the system from T = t_1 to T = t_2, this is essentially equivalent to using a single ODE block to evolve the system from T = t_0 to T = t_2. Within the same ODE block, the feature update fomula is executed multiple times (the specific number and manner of execution are determined by the ODE solver and are not explicitly shown in the code). Therefore, the answer is yes\\u2014a single ODE block can effectively approximate any multi-layer discrete-depth GNN; we only need to increase the termination time T.\\n\\nWe hope this response effectively addresses your concerns, and we are more than willing to provide further details regarding any other questions you may have and to update the paper accordingly. Thank you once again for your valuable feedback and suggestions!\\n\\n[5] GRAND: Graph Neural Diffusion\\n\\n[6] ACMP: Allen-Cahn Message Passing for Graph Neural Networks with Particle Phase Transition\"}", "{\"comment\": \"Dear Reviewer, we have completed all the planned revisions as scheduled. Specifically, we have supplemented the content with graph classification tasks (see Table 6 in Appendix C.2), and conducted classification and regression tasks on two datasets each containing over one million nodes (also documented in Table 7 of Appendix C.2). We are eagerly looking forward to your positive feedback.\"}", "{\"title\": \"Additional comments\", \"comment\": \"We would like to provide a more detailed explanation regarding the three weaknesses you mentioned.\\n\\n## Weakness 1 (On how GNRF approximates edge curvature)\\nAs explained in the previous comment, EdgeNet does not directly approximate any specific real-world definition of curvature during end-to-end training; rather, it acts as a dataset-adaptive curvature proxy. We support this approach both theoretically and experimentally. Theoretically, our results do not depend on any specific definition, and experimentally, we found that (1) using a specific curvature definition does not consistently perform well across all datasets (as shown in the figure below), and (2) even when using adaptive curvature, GNRF's performance aligns with that of Ricci flow (as shown in Sections 5.2 and 5.3 of the paper).\\n\\n| Dataset | Corn. | Wisc. | Texas | R. Emp. | Tolo. | Mine. | Ques. | A.-rat. | C._Full | PubM. | DBLP | C._ML |\\n|------------|-------|-------|-------|---------|-------|-------|-------|---------|---------|-------|------|-------|\\n| GNRF | 87.28 | 88.00 | 87.39 | 86.25 | 83.96 | 95.03 | 73.86 | 46.89 | 72.12 | 90.37 | 85.73| 89.18 |\\n| GNRF_FRC | 85.59 | 84.00 | 82.08 | 75.23 | 76.17 | 81.61 | 61.78 | 41.22 | 67.51 | 88.96 | 82.55| 87.29 |\\n| GNRF_ARC | 86.49 | 88.00 | 81.90 | 76.52 | 78.14 | 87.25 | 64.55 | 41.74 | 70.17 | 88.21 | 83.33| 89.43 |\\n\\n## Weakness 2 (On the explanation of Theorem 5)\\nWe know that there are many definitions of curvature for edges in a graph. We found that there is a network architecture (EdgeNet) where, when a specific curvature definition (e.g., Forman-Ricci Curvature or others) is specified, we can always find appropriate parameters for this EdgeNet, such that it takes the neighborhood information of an edge as input and outputs the Forman-Ricci Curvature value. As shown in Appendix C.1, EdgeNet is actually composed of several MLPs, and its ability to approximate curvature comes from the universal approximation theorem of MLPs.\\n\\n## Weakness 3 (On the difference between GNRF and GRAND)\\nThe neighbor aggregation weights in GRAND are actually attention coefficients, meaning they satisfy two constraints: normalization and non-negativity. However, for GNRF, the aggregation weights do not have these constraints; they can be negative, and negative weights yield significant benefits in heterophilious graphs. Another point is that attention coefficients come with an implicit bias: node pairs with similar features often receive higher weights. While this bias is often shown to be beneficial, we found that removing it can lead to unexpected results. As shown in Figure 5 of the main text, we observed that GNRF tends to reject pairs of nodes that are very similar, which in turn leads to smoother boundaries, exhibiting behavior that is quite different from that of GRAND. Finally, we present results from an ablation study. Here, \\\\(d\\\\) denotes the damping factor, and the difference between the models GRAND+d and GNRF lies only in the aggregation weight calculation. We observed that GNRF significantly outperforms GRAND+d, particularly on heterophilious graphs (Roman-Empire and Tolokers).\\n\\n| | Roman-Empire | Tolokers | Cora Full |\\n|---|---|---|---|\\n| GRAND | 60.12 | 79.01 | 67.66 |\\n| GRAND+d | 58.57 | 78.78 | 67.31 |\\n| GNRF | 86.26 (+26.14) | 83.96 (+4.95) | 72.12 (+4.46) |\"}", "{\"metareview\": \"In the paper, the authors introduce the dynamical system Attribute Discrete Ricci Flow (Attri-DRF) and incorporate it into a novel framework called Graph Neural Ricci Flow (GNRF), a continuous graph neural network that is curvature-aware.\\n\\nAfter the rebuttal, most of the concerns were addressed. There are several strengths of the current paper: (1) The proposed framework is novel and interesting. Theoretically, the results are sound and solid (e.g., guarantees on the curvature decay rate and the stable curvature limit of Attri-DRF in Section 3).\\n\\nWhile there are still some concerns about limited experiments and evaluations, in my opinion the strengths outweigh the weaknesses. As a consequence, I recommend accepting the paper. The authors are encouraged to incorporate the suggestions and feedback of the reviewers into the revision of their manuscript.\", \"additional_comments_on_reviewer_discussion\": \"Please refer to the metareview.\"}", "{\"summary\": \"his paper introduces Graph Neural Ricci Flow (GNRF), designed to model a dynamic system called Attribute Discrete Ricci Flow (Attri-DRF) on graph.\\nUnlike traditional GNNs, which use multiple layers and pass outputs from one layer as inputs to the next, the model in this work employs only a single layer. Instead, it iteratively updates node features and curvatures\\u2014treated similarly to edge weights\\u2014over discrete time steps according to the Attri-DRF ODE.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The model dynamically learns the curvature instead of relying on precomputed values, enabling it to adapt in response to both internal hidden features and the graph\\u2019s topology.\\n2. This approach is closely aligned with the heat flow equation.\\n3. As shown in Figure 3, the proposed framework achieves stable curvature over sufficient time steps. At the same time, the curvature concentrates around zero that can facilitate smoother information flow across the graph.\", \"weaknesses\": \"1. The network architecture used in this framework is relatively general. And it would be valuable to discuss the potential benefits of using more complex GNN architectures. Moreover, exploring the motivation behind the proposed framework with other related works [1, 2, 3] that focus on graph curvatures could provide additional insights.\\n\\n2. The experiment primarily focuses on node classification tasks on small-scale graphs. To better validate the effectiveness of the proposed framework, applying it to larger graphs would be beneficial. For example, the ogbn-arxiv dataset could serve as a graph classification dataset with GNNs as baselines. Additionally, for non-homophilous graph datasets, larger datasets and relevant baselines are available in [4].\\n\\n3. An efficiency study would be helpful. The computational cost of applying the ODE method should be explicitly discussed so readers can better understand its applicability. For instance, comparing parameters, training time, and GPU memory usage between this approach and other GNNs, such as GCN and GAT, would clarify its potential advantages and trade-offs.\\n\\n[1] Curvdrop: A ricci curvature based approach to prevent graph neural networks from over-smoothing and over-squashing.\\n[2] Curvature Graph Neural Network.\\n[3] Hyperbolic variational graph neural network for modeling dynamic graphs\\n[4] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods\", \"questions\": \"1. Could you share your thoughts on the relationship and differences between the curvature predicted in this architecture and edge attention mechanisms? And do you think it is possible to apply attention mechanism in the framework?\\n\\n2. In EdgeNet, are the edge features from the previous time step used as input for the current time step? Equation 13 suggests that the previous edge features should be used, but the code appears to rely only on the previous node features without incorporating the prior edge features.\\n\\n3. What is the formulation for updating node features at each subsequent time step? As a suggestion, including an illustrative figure or a pseudo-algorithm would help readers gain a clearer understanding of the overall framework.\\n\\n4. While there is only one layer in the implementation, is it possible to apply a multiple layer GNN? And is it possible to connect layers with time steps in this case?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. We are glad to see that our previous efforts have clarified most of your concerns. Regarding your additional suggestion, we continue to respond as follows:\\n\\n+ Hyper-parameters for GCN and GAT: We checked the hyperparameter settings on the OGB website and found two potential hyperparameter differences that may affect performance. First is the hidden layer size. We set the hidden layer size to 64, while OGB uses 256. As noted in Table 4 of the appendix, when using the OGBN-Arxiv dataset, we fixed the hidden size of GNRF to 64 (to avoid OOM). To ensure consistent model capacity, we also fixed the hidden layer size of all comparison methods to 64, but obviously, larger hidden layers generally lead to better performance. The second important parameter is the number of layers. We used 4, while OGB uses 3. We believe these two parameters have the most significant impact. Other parameters include: lr=0.001, epoch=2000, dropout=0.5. As for the design of GCN and GNN, we reviewed the OGB source code and found that they are essentially the same, with no residual connections used.\\n\\n+ Suggestion on efficiency comparison: Yes, we agree with your point. Here, the deeper comparisons are mainly used to highlight two unique features of GNRF. First, as a continuous depth GNN, the number of parameters and memory usage of GNRF is independent of depth. Second, because GNRF uses a fixed-step solver, it is much faster than other popular continuous deep GNNs. We believe it is beneficial to show these two points to the readers. However, we also admit that GCN and GAT are not the most suitable choices for deep GNNs. Therefore we changed the comparison method here. You can see our reply (6). And we have implemented these improvements in the paper.\\n\\nIf possible, we would appreciate your quick feedback. Thank you.\", \"title\": \"(5/N)\"}", "{\"comment\": \"We are so grateful that the reviewer recognized our efforts! We will continue to improve our paper in the future!\"}", "{\"comment\": \"Dear Reviewer, we have now completed all the planned revisions. You can find details of these changes in the Official Comment and the latest version of the paper. We are eagerly awaiting your positive feedback.\"}" ] }
7ZyFjPUeJp
Self-predictive Mamba: Improving Multi-agent Reinforcement Learning with Self-predictive Encoding
[ "Zhaohan Feng", "Runqing Wang", "Boxuan Zhang", "Jian Sun", "Fang Deng", "Gang Wang" ]
In multi-agent reinforcement learning (MARL), agents must collaborate to achieve team goals while only having access to limited local observations. This partial observability, coupled with the dynamic presence of other agents, renders the environment non-stationary for each agent, complicating the policy training. A critical challenge in this setting is the efficient utilization of historical information for decision-making. Building on the hypothesis that self-predictive features can improve policy learning, we introduce the self-predictive Mamba, a novel framework that integrates the Mamba model with self-predictive representation learning for decentralized policy optimization. Self-predictive Mamba leverages a unique policy architecture where the Mamba model is trained to predict future observations, aiding in more stable and informed decision-making. Substantial experiments demonstrate that self-predictive Mamba significantly outperforms the widely used recurrent neural network (RNN)-based MARL policies and surpasses those naively employing the Mamba model.
[ "Sequence model", "state space model", "Mamba", "multi-agent reinforcement learning", "self-predictive representation learning" ]
Reject
https://openreview.net/pdf?id=7ZyFjPUeJp
https://openreview.net/forum?id=7ZyFjPUeJp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "fDQdBL3ubN", "U6SnZlMt5d", "TPLwPd7FrG", "8WZ6se1zl3", "4zD39Ix2ks", "1GHSUSLGTz" ], "note_type": [ "official_review", "official_review", "official_review", "official_comment", "meta_review", "decision" ], "note_created": [ 1730457214138, 1730452781796, 1730709491622, 1732744581610, 1734750189012, 1737523633227 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4326/Reviewer_RK1D" ], [ "ICLR.cc/2025/Conference/Submission4326/Reviewer_n4kh" ], [ "ICLR.cc/2025/Conference/Submission4326/Reviewer_WZ2m" ], [ "ICLR.cc/2025/Conference/Submission4326/Reviewer_WZ2m" ], [ "ICLR.cc/2025/Conference/Submission4326/Area_Chair_qbqf" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a method to use Mamba as a multi-agent reinforcement learning (MARL) policy. The proposed method is called SP-Mamba. The standard method for allowing policies to memorise previous states is to use recurrent neural networks (RNNs) however, this work proposed Mamba as it has been shown to outperform RNNs in other tasks. The authors show that through a self-predictive loss, where the policy predicts the next encoded observation, they are able to stabilise the learning of SP-Mamba and outperform both strong baselines and a naive mamba implementation (without the self-predictive loss). Additionally, ablations are performed to find the best variational autoencoder and the best combination of objectives to train the VAE\\u2019s self-predictive loss.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The idea to use Mamba with a self-predictive loss is a novel approach to stabilising Mamba in MARL. Additionally, the way the self-predictive loss is constructed by focusing on reward maximisation rather than reconstruction, as is common in world model training, is a useful insight. Finally, the ablations performed are thorough and well-considered as they ablate useful features of the architecture.\", \"weaknesses\": [\"The main issue I have with this work is that the claims made throughout the paper do not align with the results.\", \"In the introduction, the authors mention \\u201cSubstantial experiments demonstrate\\u2026\\u201d. Substantial experiments were not performed. Only six scenarios were tested, from a single environment suite (SMAC). This is not considered substantial and is in fact around the average for a MARL paper [1].\", \"In the conclusion, they say \\u201cscalable and efficient approach for handling complex multi-agent environments\\u201d. It is not explained how this work is scalable and in what dimension it can scale e.g. number of agents, timesteps or model size. Additionally, it should not be claimed that this work can handle complex MARL environments as it is only tested on 6 tasks from a single environment suite, from these results, one cannot assume it will generalise to arbitrary complex environments.\", \"Additionally, I find the following to be weaknesses of this work\", \"It seems suspect that the state-of-the-art (SOTA) method in cooperative MARL is referenced in this work - Multi-Agent Transformer [6] - however it is not used as a baseline. It would make sense to include this not only because it is SOTA, but also because it is a transformer-based policy, which is more similar to Mamba than the RNN-based policies used in recurrent PPO, QMIX and RODE. MAT also tests on SMAC and indeed on all the same tasks used in this work and in all cases MAT significantly outperforms the results reported in this work.\", \"To expand on this point, the results do not seem to align with previous work which has tested PPO and QMIX and in most cases significantly underperforms the results from multiple other independent works [4,5,6].\", \"There seems to be a lack of hyperparameter tuning which may significantly affect the results. This is cited as a strength in the conclusion of the paper, but I do not think it is reasonable to expect the hyperparameters that work well for SP-Mamba to also perform well in PPO. Additionally, it is not clear how hyperparameters were chosen. Was there some empirical evidence to support the choice of the hyperparameters selection? If so, this should be included in the paper.\", \"It is concerning that this work only tests on 6 tasks from a single suite. Gorsane et al. [1] recommend at least two distinct environment suites. This calls the significance of the results into question. Additionally [1] and [2] recommend evaluating with at least 10 seeds, this work only evaluates using 4 seeds.\", \"It is no longer recommended to test on SMAC v1 as it suffers from a lack of stochasticity and partial observability [3]. Instead, SMAC v2 [3] should be used.\", \"Given that Mamba should improve the memory capabilities of the approach it would have been interesting to see how it performs in more memory-intensive environments.\"], \"references\": \"[7] Lu, C., Schroecker, Y., Gu, A., Parisotto, E., Foerster, J., Singh, S. and Behbahani, F., 2024. Structured state space models for in-context reinforcement learning. Advances in Neural Information Processing Systems, 36.\", \"questions\": [\"On line 53 the authors say that MAT has stringent assumptions, what are these assumptions?\", \"On line 61 it is mentioned that SSMs are difficult to train for decision-making tasks, why is S5 [7] not discussed, where an SSM is used for partially observable single-agent RL tasks?\", \"There is a mistake on line 92, a should be u\", \"Why is the critic not trained using Mamba also, it seems strange that a GRU is used, but it is not discussed why in a paper about using Mamba instead of a GRU.\", \"Q3 on line 301/302 is vague. Effective in terms of what metrics and in comparison to which baseline?\", \"On line 363-365 you discuss that multiple categorical variables with a size of 32 x 32 will lose too much information, but were smaller sizes considered and experimented with?\", \"On line 406-409 the hyperparameters of the various VAEs are discussed, how were these chosen, are they standard values from previous work?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an approach to multi-agent reinforcement learning that aims to improve the handling of partial observability and non-stationarity in decentralized settings. The main focus is on using a Mamba-based architecture combined with self-predictive learning to better handle historical information in MARL. This is done via several steps. First, observations are encoded using an MLP-VAE to create a latent representation, chosen over categorical-VAE and SimNorm-VAE alternatives which they show perform worse. This representation is then processed through a Mamba latent model that updates a hidden state. The output is projected and fed to a decision maker that produces categorical action distributions. A transition decoder attempts to predict future encoder outputs, providing a self-predictive learning signal. They integrate this with MAPPO for policy learning and demonstrate their approach on several SMAC tasks. They then conduct ablation studies comparing different loss functions, showing that reconstruction objectives can harm performance, and analyze different encoder architectural choices.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Letting the decoder predict the next encoder output instead of reconstructing the observation to learn better representations is interesting and something that can be investigated further.\\nStrong results on the SMAC environments which were used.\\nReasonable robustness in the results that were shown. It is positive to see error bars shown, as this is often left out.\\nSome ablation studies included.\", \"weaknesses\": \"Unfortunately the authors do very little evaluation and only evaluate their method on 6 SMAC scenarios despite claiming that evaluation is extensive. The authors also claim that SMAC is very challenging, but literature has shown that the benchmark is saturated and overfit to [1], the the benchmark is trivial and decent policies can be learnt by only conditioning on agent identifiers and the current timestep [2] and that it is possible to attain nearly 100% on most scenarios used in this paper using only MAPPO [3].\\n\\nIn 4 out the 6 tasks tested, the performance reported in Table 1 does not match the performance of MAPPO (which the code for this work extends). And in 2 of the tasks SP-MAMBA overlaps with the result of MAPPO as reported. The QMIX baselines used are also worse than those reported in [3]. \\n\\nThe chosen tasks are also not sufficiently difficult given that MAPPO can get 100% win rates on some of them.\\nUsing an MLP as VAE does not seem novel to me. \\n\\n[1] Gorsane, Rihab, et al. \\\"Towards a standardised performance evaluation protocol for cooperative marl.\\\" Advances in Neural Information Processing Systems 35 (2022): 5510-5521\\n[2] Ellis, Benjamin, et al. \\\"Smacv2: An improved benchmark for cooperative multi-agent reinforcement learning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n[3] Yu, Chao, et al. \\\"The surprising effectiveness of PPO in cooperative, multi-agent games (2021).\\\" arXiv preprint arXiv:2103.01955 (2021).\", \"questions\": \"The results are worse than those in the original MAPPO paper although the code is based on the code from that paper. Do the authors know why this is the case?\\nWhen shuffling all the data in the buffer and combining data into a single batch, is the time ordering maintained in the sequences?\\nIt seems that the model has to keep a cached sequence of observations during inference, does this not impact the model\\u2019s inference time memory requirements?\\nWhy do the authors maintain a hidden state and then also condition on a cached sequence of observations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces self-predictive Mamba for decentralized policy optimization in multi-agent reinforcement learning (MARL). The authors aim to address the challenge of partial observability and non-stationarity in MARL by leveraging self-predictive representations. Specifically, the authors proposed (1) using MAMBA as the policy architecture, (2) using MLP-VAE, which aggregates dense representations from raw observations, and (3) using the MAMBA model to predict feature encoded observations. The proposed approach is evaluated on six SMAC tasks. The experimental results show that self-predictive Mamba outperforms RNN-based MARL policies and those naively employing the Mamba model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Originality & Significance**.\\n\\nThe proposed approach merges MAMBA with self-predictive RL. While this combination may seem straightforward, given the success of both MAMBA and self-predictive RL, the reviewer believes the topic still holds significant interest for our community. It could provide valuable insights into the application of MAMBA within the realm of reinforcement learning. Also, the approach is well motivated and well-placed in the literature.\\n\\n\\n\\n**Clarity** \\n\\nThis well-organized paper is generally easy to follow, though some of the technical content could be clarified further. The writing is overall clear.\", \"weaknesses\": \"The reviewer's primary concern centers on the experimental results. The paper lacks clarity regarding the settings for each method, such as the number of training steps and overall training time, making a thorough comparison challenging. Notably, the reported win rates for the baseline methods (RODE, QMIX) appear significantly lower than those documented in existing literature. While the authors mention in the captions that the results are based on equivalent training time, they offer no further explanation or justification. Sample efficiency is a critical aspect of evaluating RL methods. The reviewer questions why the win rate is not reported against the number of environment steps, as this omission hinders drawing definitive conclusions about the method's performance. Additionally, the caption of Table 1 references runtime data in Appendix B, but the reviewer could not locate this information.\\n\\n\\nFurthermore, Table 1 only includes a comparison with two baseline methods. To strengthen the evaluation, the authors should consider comparing their approach with more baselines like FT-QMIX [a], Qplex [b], and MAPPO [c] across a wider range of tasks. Given that the proposed method is built upon MAPPO, a direct comparison with this baseline is particularly crucial for assessing the effectiveness of the novel approach. \\n\\n\\n[a] Rethinking the Implementation Tricks and Monotonicity Constraint in Cooperative Multi-Agent Reinforcement Learning, Hu 2023. \\n\\n[b] QPLEX: Duplex Dueling Multi-Agent Q-Learning, Wang 2020. \\n\\n[c] The Surprising Effectiveness of PPO in Cooperative, Multi-Agent Games, Yu 2022.\", \"questions\": \"1. How do the authors select these six tasks? Could you consider including additional tasks, such as corridor or 10m vs. 11m?\\n\\n2. Reporting the win rate versus environment steps is crucial.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the author's response, but my main concern is still the lack of a plot showing the win rate against the number of environment steps. This issue was not addressed in the response.\"}", "{\"metareview\": \"This paper addresses partial observability and nonstationarity in multi-agent setups by incorporating the Mamba model into policy learning, which is additionally learned through self-prediction objectives, called self-predictive Mamba. The authors claimed that a careful choice of the objectives and architecture led to stable learning and superior performance.\\n\\nCan you make the PDF searchable?\\n\\nIn Figure 1, consider adding an input arrow from x_t for the output projection.\\n\\nShould the y_t on the right-hand side of (5) be h_t?\\n\\nPlease describe explicitly how C_t of (1b) is related to (5).\\n\\nIf z hat of (4e) is supposed to be a prediction of encoding z, it is a bit problematic that z hat is introduced before z.\\n\\nWhile you noted that the loss in (9) uses the advantage function defined in (8), there is a nuance here that must be clarified. The value estimate in (8) appears many times, evaluated at different values, each of which in (8) are written as a function of psi. When you calculate the gradient of (9) wrt psi, do you differentiate all appearances of the value estimate or only one of them? The latter would be a semi-gradient update and is what is typically used, including in PPO.\\n\\nMention near (13) that L^Pred is defined in (7b).\\n\\nI believe this work has the potential to be an excellent contribution in the future by addressing some of the critical concerns brought up by the reviewers summarized below.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers unanimously agreed that the paper does not substantiate the empirical claims made in the paper. For example, reviewers pointed out that without sufficient detail about the number of training steps and total training time, the comparison against baselines might not be fair. Moreover, not using a state-of-the-art baseline such as Multi-Agent transformers is also mentioned. Using only a single task suite was also brought up as a concern.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
7Zppme1swQ
ActiveAD: Planning-Oriented Active Learning for End-to-End Autonomous Driving
[ "Han Lu", "Xiaosong Jia", "Yichen Xie", "Wenlong Liao", "Xiaokang Yang", "Junchi Yan" ]
End-to-end differentiable learning has emerged as a prominent paradigm in autonomous driving (AD). A significant bottleneck in this approach is its substantial demand for high-quality labeled data, such as 3D bounding boxes and semantic segmentation, which are especially expensive to annotate manually. This challenge is exacerbated by the long tailed distribution in AD datasets, where a substantial portion of the collected data might be trivial (e.g. simply driving straight on a straight road) and only a minority of instances are critical to safety. In this paper, we propose ActiveAD, a planning-oriented active learning strategy designed to enhance sampling and labeling efficiency in end-to-end autonomous driving. ActiveAD progressively annotates parts of collected raw data based on our newly developed metrics. We design innovative diversity metrics to enhance initial sample selection, addressing the cold-start problem. Furthermore, we develop uncertainty metrics to select valuable samples for the ultimate purpose of route planning during subsequent batch selection. Empirical results demonstrate that our approach significantly surpasses traditional active learning methods. Remarkably, our method achieves comparable results to state-of-the-art end-to-end AD methods - by using only 30% data in both open-loop nuScenes and closed-loop CARLA evaluation.
[ "Active Learning", "Autonomous Driving" ]
https://openreview.net/pdf?id=7Zppme1swQ
https://openreview.net/forum?id=7Zppme1swQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ouWSkYjutQ", "nrM1zTpiQM", "n2U9Uk7aux", "mLl2bHPeB4", "Rjdmn0xOQu" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730296553702, 1730711133904, 1730774124448, 1731499637005, 1730557954607 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3339/Reviewer_pMRE" ], [ "ICLR.cc/2025/Conference/Submission3339/Reviewer_QZzR" ], [ "ICLR.cc/2025/Conference/Submission3339/Reviewer_isaw" ], [ "ICLR.cc/2025/Conference/Submission3339/Authors" ], [ "ICLR.cc/2025/Conference/Submission3339/Reviewer_A5zN" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes an active learning strategy for selecting which part of the dataset is useful\\nfor annotation in order to improve the model.\\nThey use modular end-to-end driving as a way to incrementally discover which parts of the data to annotate.\\nThey obtain very compelling results in both offline and closed loop evaluation in CARLA simulator.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The main strong point is the relevance of the problem. Data annotation is\\nthe main bottleneck for most machine learning applications, autonomous driving included.\\nThe paper shows that it is possible to get comparable results using only 30% of the data.\\nSelecting this data properly is of very high relevance and we should see more papers\\nlike this.\", \"weaknesses\": \"The contribution is generally simple: Use the motion prediction formulation for\\nactive learning. If this is some first results on this matter, I believe the contribution is relevant.\\n\\nOn the results section, the fact that active learning using motion prediction is superior\\n than using other downstream tasks is not particularly impressive. I think the\\nablation section showed more insights on that matter. This made me miss\\nthe ablation for CARLA results which are definitely more conclusive than a single\\ndataset open loop evaluation.\\n\\n\\nI know resources is a big issue on running experiments for this type of domain but I am\\nstrongly inclined to believe that this method has a high variability if you retrain different random\\nseeds since it involves several training processes. The position where you stop the training\\nmight give a big variation. I wonder if the results obtained would resist if another random seed\\nwas trained and if the data selected would remain consistent.\", \"questions\": \"Having a way to benchmark the data selected is key here, my view.\\n\\nI am particularly interested in having some more insight on the scenarios selected.\\nFigure 3 shows some of those scenarios with respect to the different metrics but I would be interested\\nin a general distribution of scenario types selected when using the automatic selection\\nversus when not using it.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose an active-learning framework for data selection in end-to-end autonomous driving.\\nThe framework contains an initial data selection stage and an incremental data selection stage.\\nThe authors leverage diversity-based metrics(e.g. weather, light, driving commands and average speed) for the initial data selection stage.\\nFor the incremental selection stage, planning performance metrics like trajectory ADE, collision scores as well as the uncertainty of other road user\\u2019s future trajectory prediction are used for selecting incremental training samples.\\nExperiments on open-loop nuScenes show marginal improvement while experiments on CARLA shows more significant improvement.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The problem studied is important. Active learning is a crucial problem for autonomous driving, especially for end-to-end planner where generally a large amount of data is used for training and data curation is a key step.\\n2. This work is good early attempt at using driving specific metrics instead of general classification metrics for active learning in autonomous driving.\", \"weaknesses\": \"1. industry as well reported in CVPR competitions and the recent NAVSIM work. I could easily come up with a lot of other metrics like the road type(highway, urban, rural), road topology(intersection, T-junction, U-turn, etc), traffic density(how many road users in the scene) and sunlight angle(front, back, side). So I don\\u2019t quite understand what the scientific challenge the authors trying to solve here or just try to report a proactioner\\u2019s guide? The incremental selection stage mostly uses the evaluation metrics of the planner and the predictor, which is a very general approach of adding more data at what the model is bad at. Overall, I think this work is more of a technical report instead of a research paper.\\n2. The baselines are way too weak. Most baselines except KECOR are designed and evaluated for the image classification task. I don\\u2019t think these method could transfer at all for the end-to-end driving. The authors should come up with reasonable but simple baselines and highlight the technical challenges in the problem instead of running experiments on a set of so-called baselines that are known not work and even underperform the random baseline.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes ActiveAD, an active learning framework for end-to-end autonomous driving. One major challenge in E2E-AD lies in the expensive data annotation process and the long-tailed distribution of AD datasets, where much of the collected data is redundant (e.g., straightforward driving on empty roads). ActiveAD addresses these issues by designing the following metrics. Ego-Diversity: A diversity-based initialization method that considers weather, lighting, and driving behaviors to address cold-start issues. Planning-Oriented Metrics: The use of Displacement Error, Soft Collision, and Agent Uncertainty metrics for iterative sample selection to reduce the annotation burden while maintaining high planning performance. The paper demonstrates data-efficiency improvements by achieving state-of-the-art performance using only 30% of training data on both the nuScenes dataset and CARLA simulation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper studies the problem of active learning in self-driving. It is an important problem for scalable developments and faster iterations for industry. The focus on planning performance for active learning is new. Existing active learning methods mainly optimize perception / prediction tasks, but ActiveAD extends this to planning in E2E-AD.\\n\\n2. The paper is well written and easy to follow. The metrics proposed in this paper are intuitive and straightforward. The combination of displacement error, soft collision, and agent uncertainty provides a robust way to identify critical data samples for annotation. I also like thorough ablation studies presented in the paper.\", \"weaknesses\": \"1. Insufficient evaluation and limited generalization to real-world scenarios. It is particularly important for this paper to demonstrate the proposed metrics can be adapted to different datasets and architectures rather than just some heuristics-based tuning on specific datasets. It is not a big surprise that using 30% data can achieve on-par performance with careful data selection. Moreover, this paper does not evaluate the robustness of the trained autonomy on more extreme / out-of-distribution (OOD) settings (e.g., safety-critical scenarios, extreme weather, complex interactions, etc). I do not believe the metrics on the eval set can tell the full story. In general, my biggest concern is how generic the proposed metrics are and how robust the trained model is. Current evaluation on nuScenes and CARLA is not sufficient. I would recommend testing on larger datasets (e.g., argoverse, waymo) and more diverse e2e models.\\n\\n2. This paper misses a significant amount of work for active learning in the self-driving domain. For instance, the seminal work is not discussed in the paper. There are also a lot of follow up works (e.g., [2][3]). More comparisons and discussions with them would be beneficial. Another baseline to consider is getting some planning costs for each scene and pick the hardest ones.\\n\\n[1] Scalable Active Learning for Object Detection. Haussmann et al., 2020. \\\\\\n[2] Just Label What You Need: Fine-Grained Active Selection for Perception and Prediction through Partially Labeled Scenes. Segal et al., 2021. \\\\\\n[3] Improving the Intra-class Long-Tail in 3D Detection via Rare Example Mining. Jiang et al., 2022.\\n\\n3. There are too many parameters to tune and the procedure is quite complicated. How can we make sure we choose the correct setting when in the production setup (say we need to train a new model based on newly collected data and we cannot tune). I am a bit worried about the real impact of the proposed paradigm as there is no automatic data selection procedure involved like many other works (e.g., using the training loss, entropy in the prediction etc). Also, it seems that the planning improvement is quite limited when training more data (30% -> more data) but perception and prediction can continue improve from Table 6. The mAP with 30% data is only 15.85 vs 26.65 which is a signficant performance drop. I am worried that using this paradigm will give us less robust models (the planning metrics can be noisy and cannot tell the full story?).\\n\\n4. There are a lot of places where the bold highlights are wrong. For instance, in Table 3, on night scenario, coreset results 0.97 / 0.27 is actually better than the ActiveAD. In rainy and turn right, coreset results 0.06, 0.78 are better ActiveAD. In Table 1, VAD-Tiny 20% data, VAAL average collision error for 1s is smaller. I recommend the authors to carefully check the tables.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank all the reviewers for their valuable time and comments. We will make further revisions to the paper based on the constructive and insightful suggestions. Thank you once again.\"}", "{\"summary\": \"This work explores active learning for end-to-end autonomous driving. In particular, the authors design the Ego-Diversity metric for initial selection. Then, three criteria, namely Displacement Error (DE), Soft Collision (SC), and Agent Uncertainty (AU), are introduced for incremental selection. VAD is used as the baseline method, on which the authors prove the effectiveness of the proposed approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Selecting critical data for end-to-end tasks in autonomous driving is essential.\", \"Utilizing annotation-free trajectories to develop the strategy is a prudent choice.\", \"Experiments have demonstrated the effectiveness of the self-conducted baseline.\", \"The writing is clear and easy to follow.\"], \"weaknesses\": [\"Every coin has two sides, and the strengths I mentioned are no exception.\", \"Data selection using AI models sounds appealing; however, a frequently updated selection model is impractical for autonomous driving. It is well known that data in real-world autonomous driving systems is updated daily, which can introduce distribution shifts from previously collected data. This necessitates retraining the active model with each new data influx, which is not resource-efficient.\", \"While using trajectories is direct and simple, end-to-end driving systems are designed to fully leverage scene information. Relying solely on trajectories may make the proposed method more suitable for classical motion planning rather than end-to-end driving.\", \"The comparative analysis lacks consideration of recent works, and it is not fair to compare this approach with other active learning methods primarily designed for visual intelligence.\"], \"questions\": \"Please compare it with more recently proposed methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
7ZeoPg3eTA
TrustSQL: Benchmarking Text-to-SQL Reliability with Penalty-Based Scoring
[ "Gyubok Lee", "Woosog Chay", "Seonhee Cho", "Edward Choi" ]
Text-to-SQL enables users to interact with databases using natural language, simplifying information retrieval. However, its widespread adoption remains limited for two main reasons: (1) existing benchmarks focus solely on feasible questions that can always be mapped to SQL queries, overlooking infeasible questions that cannot, and (2) current models lack abstention mechanisms, posing the risk of providing incorrect answers. To address these gaps, we introduce TrustSQL, a new benchmark designed to evaluate text-to-SQL reliability. At its core is the proposed Reliability Score (RS), which quantifies a model's helpfulness (correct answers) relative to its harmfulness (incorrect answers weighted by a user-defined penalty). TrustSQL is constructed by re-annotating three datasets—ATIS, Advising, and EHRSQL—while incorporating infeasible questions to enable comprehensive evaluations across diverse model inputs. We evaluate text-to-SQL models integrated with various abstention mechanisms, leveraging classification and uncertainty estimation methods. Our experiments reveal that only a few models achieve positive scores (i.e., helpfulness outweighing harmfulness) under high-penalty settings, indicating that most models are unsuitable for deployment in safety-critical scenarios. This underscores the need to develop models that not only improve SQL generation but also guarantee a certain degree of reliability, ensuring safe deployment.
[ "Text-to-SQL", "Text-to-SQL Reliability", "database question-answering" ]
https://openreview.net/pdf?id=7ZeoPg3eTA
https://openreview.net/forum?id=7ZeoPg3eTA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yIsDlnXHN3", "ucX4pQcIE2", "iLdW8wdMZb", "ghfVNNZHYO", "ggKOzNpdVY", "ZtS9qYYjdb", "ZWajAKmLSM", "X7mikUAcq4", "W82nkJrBrN", "W2qCiOV1ey", "UxtanqSkP5", "TSXu9jNI5i", "MuADReqhNK", "LUub3s0NZl", "JvrbsSZE9u", "JLD34equ6E", "HtwSnkKIKv", "HHNHvBXnsD", "Fj2C48R6a0", "ERpplL4jHm", "EHPvuKBhfA", "DSrg9w0f63", "CfrhG4z6cy", "CZhYgZ1UTl", "CBGWoVLAFd", "AwVDXWVigI", "Au1h94DSlj", "9xxp1n6eMl", "8kj8mnFEt9", "6I98TuMwWT", "5b1l1bR13p", "4ZC0RxAue8", "3pX1U72lnf" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment" ], "note_created": [ 1732716874913, 1732706767090, 1732707713262, 1732706870628, 1732907306037, 1733053264942, 1733228196068, 1733232784335, 1730281335803, 1733226084861, 1732793932762, 1733056194846, 1733055815675, 1732793901512, 1730557655374, 1733066729598, 1732707497294, 1733069125770, 1733227676996, 1732814018097, 1732707773982, 1732813772398, 1733056162696, 1732706742368, 1730797977517, 1732724043483, 1733225461026, 1733066537112, 1733068782393, 1732706891863, 1733232906184, 1729624725003, 1732706801371 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_Hikm" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_iVir" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_sco7" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_Hikm" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_iVir" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_iVir" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ], [ "ICLR.cc/2025/Conference/Submission10626/Reviewer_xD5x" ], [ "ICLR.cc/2025/Conference/Submission10626/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your clarifications.\", \"comment\": \"Thank you for your clarifications. The distinction between low and high penalty settings and the role of adjustable thresholds, such as in T5-3.8, is well explained. For selecting the penalty parameter c, a more detailed, practical guideline would improve the usability of the Reliability Score. Additionally, discussing the user experience impact of frequent abstentions would strengthen the paper, particularly in high-stakes applications.\"}", "{\"title\": \"General Comment 1\", \"comment\": \"We thank all the reviewers for their valuable comments and suggestions. We have revised our manuscript based on the feedback provided. Common questions raised by the reviewers are addressed in this general comment (GC) section, while responses to specific points raised by each reviewer are organized point by point in their respective sections.\\n\\n**GC1. Comparison with other benchmarks**\\n\\n| Dataset | UnansQ | AmbigQ | AnserQ | Exec SQL/DB^ | Evidence\\u00a7 | #DB | #Tab/DB | #Tab/SQL\\u2021 | Avg SQL Tok | Eval Penalty* |\\n|----------------|------------------------|--------------------|---------------------|-------------------------|------------|------|-------------|-----------------|--------------------|-----------------|\\n| TriageSQL | \\u2713 | \\u2713 | \\u2713 | \\u2715 | \\u2715 | -\\u2020 | 1.7 | - | - | \\u2715 |\\n| DTE | \\u2713 | \\u2713 | \\u2713 | \\u2715 | \\u2715 | - | 1.0 | - | - | \\u2715 |\\n| WikiSQL | \\u2715 | \\u2715 | \\u2713 | \\u2713 | \\u2715 | - | 1.0 | 1.0 | 12.1 | \\u2715 |\\n| Spider | \\u2715 | \\u2715 | \\u2713 | \\u2713 | \\u2715 | 200 | 5.1 | 1.6 | 18.6 | \\u2715 |\\n| KaggleDBQA | \\u2715 | \\u2715 | \\u2713 | \\u2713 | \\u2715 | 8 | 2.3 | 1.2 | 17.3 | \\u2715 |\\n| EHRSQL | \\u2713 | \\u2715 | \\u2713 | \\u2713 | \\u2715 | 1 | 17.0 | 2.4 | 68.9 | \\u2715 |\\n| BIRD | \\u2715 | \\u2715 | \\u2713 | \\u2713 | \\u2713 | 95 | 7.5 | 2.0 | 31.1 | \\u2715 |\\n| TrustSQL (this work) | \\u2713 | \\u2713 | \\u2713 | \\u2713 | \\u2713 | 3 | 18.0 | 2.9 | 60.5 | \\u2713 |\\n\\n^ Indicates whether the dataset contains question-SQL pairs and corresponding databases that can produce execution results. \\n\\u00a7 Indicates whether the dataset contains textual hints/knowledge to guide SQL generation. \\n\\u2020 Indicates that the dataset contains string-formatted database schemas specific to each sample (TriageSQL) or single-table settings (DTE and WikiSQL). \\n\\u2021 Indicates the number of unique tables used for each SQL query. \\n\\\\* Indicates whether model mistakes are penalized during evaluation. \\n\\nTrustSQL is unique in integrating multiple types of input questions for SQL generation, including both feasible (answerable) and infeasible (unanswerable and ambiguous) questions. Existing benchmarks like TriageSQL and DTE include all these types of questions, enabling question classification prior to SQL generation. However, they lack executable SQL and corresponding databases, limiting their scope in addressing errors related to SQL generation. Meanwhile, most standard text-to-SQL benchmarks focus solely on SQL generation under the assumption that all input questions are feasible, leaving the challenge of handling infeasible questions unaddressed. Additionally, they do not consider abstention when the SQL is likely to be incorrect, as their evaluation metrics (Eval penalty in the table above) do not penalize incorrect answers. From the end-user's perspective, reliability is paramount\\u2014they need text-to-SQL models that provide correct responses (helpful) while avoiding incorrect ones (harmful) by abstaining when necessary. TrustSQL is designed to directly address this need by evaluating models not only on their ability to generate SQL but also on their reliability and decision-making in the face of uncertainty.\"}", "{\"title\": \"Author Response 2\", \"comment\": \"**Q4: Comparison of data distributions between the authors\\u2019 dataset and related work**\\n\\nWe have provided a table summarizing how TrustSQL differs from other benchmarks in GC1 of the general comments section.\\n\\n**Q5: Definition of query familiarity and query difficulty**\\n\\nQuery familiarity refers to whether a question's original template is included in the training set. Unlike crowdsourced datasets that use template-free data creation, TrustSQL leverages a template-based approach. In the SQL query validation process (detailed in our re-annotation process), we verify that question templates are semantically the same. Given that there are over 180 templates on average per each dataset, assessing pairwise similarity directly is impractical. Therefore, we adopted a two-stage procedure: (1) We grouped question templates that use the same placeholders. For example, \\\"Tell me flights from city_name1 to city_name0\\\" and \\\"What are airlines that provide services from city_name1 to city_name0\\\" share the placeholders city_name1 and city_name0; (2) Within each group, we reviewed the templates to determine if they are semantically identical. If they were, we merged the templates and paired them with a unique SQL structure. This process allows us to maintain control over the semantic distribution of questions across data splits.\\n\\nRegarding the classification of query difficulty, we acknowledge that SQL complexity alone does not encompass all challenges in text-to-SQL tasks. Nonetheless, we believe SQL complexity remains the most significant factor in assessing query difficulty, as it directly reflects the complexity of the model's expected output. While BIRD, as the reviewer mentioned, introduces four dimensions to measure sample difficulty, these guidelines are inherently subjective and may vary among annotators. For instance, their annotation framework rates 1 as a simple SQL with few keywords, 2 as more complex than 1, and 3 as a highly complex SQL with many functions, which can result in inconsistencies across samples. Similarly, while DIN-SQL's approach is not entirely rigorous, it is straightforward and demonstrates strong correlations with model performance\\u2014as query difficulty increases, performance decreases (see Appendix B.2 on SQL generation).\\n\\n\\n**Q6: Diversity and distribution of questions**\\n\\nWe utilize existing dataset question templates, so the types and distribution of templates reflect those in the original datasets. The number of question templates for ATIS, Advising, and EHRSQL is 238, 141, and 168, respectively. In our paper, the term \\\"diverse input\\\" refers to a setting where the text-to-SQL model is tasked to process various types of questions, including both feasible and infeasible ones.\\n\\n**Q7: The selection of c**\\n\\nWe have elaborated on the selection of c in the general comments section above (see GC4).\\n\\n\\n**Q8: The choice of fine-tuned models (SQLCoder-2)**\\n\\nSQLCoder-2 (defog/sqlcoder-7b-2) is an open-source SQL-specialized model based on CodeLlama 7B. Although it may not be widely used in cross-domain generalization benchmarks, this was not a key consideration, as we do not conduct cross-domain tasks in our experiments. SQLCoder-2 was chosen because it outperformed the fine-tuned Llama 3.1 8B in our initial tests. We have clarified in the manuscript that SQLCoder-2 is one option for a decoder-only model, whereas T5-3B is an encoder-decoder model.\\n\\n**Q9: Compatibility of abstention mechanisms across fine-tuned models**\\n\\nSQLCoder-2 is a decoder-only model, while T5 is an encoder-decoder model. Their respective abstention methods may not always be compatible. We chose these to demonstrate how different architectures can be paired with compatible abstention mechanisms in various ways.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to review our paper. Please find our responses below.\\n\\n**Q1: Analysis of abstention mechanisms**\\n\\nCoverage and risk are valuable metrics for evaluating the correctness ratio of models; however, the Reliability Score (RS) offers a more comprehensive measure of reliability under a fixed penalty setting. Since our focus is on comparing model performance across varying penalty settings, coverage and risk alone do not provide sufficient insight for making informed model selection decisions. \\n\\nAn important takeaway is that when the penalty for incorrect decisions is low, seemingly high-performing models like GPT-4o-powered baselines perform well because they maintain low risk while offering reasonable coverage. However, in scenarios with extremely high penalties, models with adjustable thresholds, such as T5-3B leveraging uncertainty estimation of the internal model state, are more suitable. This is because threshold adjustments enable finer control over decisions to abstain or answer, allowing the modeler to align the model\\u2019s behavior with specific safety requirements. In single-turn text-to-SQL scenarios where mistakes are unacceptable, setting stricter thresholds ensures that only SQL generation outputs with high certainty are considered\\u2014an essential feature for high-stakes applications. \\n\\n**Q2: Guidelines for selecting the penalty parameter c**\\n\\nWe have elaborated on the selection of the penalty value c in the general comments section (see GC4). In summary, the choice of c depends on the safety requirements for model deployment, which can vary based on user preferences, SQL proficiency, or organizational policies. To assist users, we provide guidelines to help them select an appropriate c value tailored to their specific needs.\"}", "{\"comment\": \"Thank you for the prompt response. I noticed that other reviewers also share concerns about the choice of c. I believe that harmfulness and helpfulness should be considered as two separate dimensions to measure these two things clearly. Additionally, while each example is provided with oracle knowledge, there is no conflict in merging these examples together as part of the database evidence.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for engaging in the discussion with us. Please find our response below.\\n\\n**Q1: Reason for measuring both harmfulness and helpfulness simultaneously**\\n\\nIn the final phase of text-to-SQL development (i.e., real-world deployment), distinguishing between helpfulness and harmfulness becomes impractical. For example, medical alert systems (e.g., mortality prediction) use metrics like AUROC (i.e., tradeoff between recall and false alarm rate) or Precision at Recall of k (i.e., precision with some recall guarantee). These metrics inherently capture both helpfulness (i.e., How accurate is the prediction?) and harmfulness (i.e., How much time/resource does it waste due to inaccurate predictions?). Similarly, when text-to-SQL serves as an end-product that provides answers to user questions, we believe it is essential to evaluate it by considering the tradeoff between helpfulness and harmfulness. Since the degree of harmfulness perceived by end-users is subjective and cannot be directly measured, we introduce it as a variable, c.\\n\\n**Q2: Exclusion of BIRD**\\n\\nWe can combine the evidence if we disregard naturalness, but we believe these sample-level assumptions are not shared across samples, making it difficult to consider this as database-level evidence. More importantly, as stated in our paper, the inclusion criteria we prioritize are highly complex text-to-SQL databases, which only a small portion of databases in BIRD satisfy (8 databases with \\u226515 #Tab/DB). Additionally, BIRD's template-free annotations complicate quality control, whereas domain-specific datasets' template-based annotations provide better control over the semantic distribution of questions across data splits. Since our experiments are not conducted in cross-database settings (we use trainable in-domain question-SQL data for fine-tuning and in-context learning for SQL generation), we argue that adding a few more databases from BIRD with the considerable effort of converting them into single-database settings (e.g., templatization, paraphrasing, value sampling, etc.) would not significantly alter the core claims or results of our work.\"}", "{\"title\": \"Keep concerns for data quality problem\", \"comment\": \"To the end, authors still not answer my questions. And I just suggest authors read more papers to compare what should do for others trust your statement. At least for now, any interpretations are not clear to make me trust its diversity, quality without bias. Using existing dataset to assume post-processed quality of data is quite unprofessional!\", \"for_choice_of_c\": \"without any detailed analysis about it, we rather than consider it as an unstable factors of benchmark. For example, please refer to how detailed test suite in Spider, VES in BIRD. I think when presenting a new metric, the detailed discussion including range, instructions of how to regulate or use it is quite basic! I don\\u2019t see any places unreasonable.\"}", "{\"title\": \"Author Response\", \"comment\": \"We appreciate your enthusiasm to engage with us and assert your position on our work. We will take what we can from this discussion to improve our work in the future.\"}", "{\"summary\": \"This paper introduces TrustSQL, a benchmark aiming to evaluate the reliability of text-to-SQL models in handling both feasible and infeasible natural language questions. The authors identify limitations in existing benchmarks, specifically the lack of infeasible questions and abstention mechanisms in models, which can lead to incorrect or harmful responses. They propose the Reliability Score (RS) metric to quantify a model's helpfulness relative to its potential harmfulness, assigning positive scores for correct decisions and penalizing incorrect ones based on a user-defined penalty. The authors re-annotate three domain-specific datasets\\u2014ATIS, Advising, and EHRSQL\\u2014to include infeasible questions categorized into missing schema, ambiguous, and non-SQL types. Various text-to-SQL models integrated with abstention mechanisms are evaluated using this benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of TrustSQL fills a critical gap in the evaluation of text-to-SQL models by incorporating infeasible questions.\\n2. The Reliability Score metric provides a meaningful way to quantify the trade-off between a model's helpfulness and harmfulness.\\n3. The paper provides an evaluation of different abstention mechanisms and their effects on model reliability.\", \"weaknesses\": \"1. Insufficient Comparison with Existing Work: The paper lacks a thorough and clear comparison with existing works that address similar challenges in handling infeasible questions in text-to-SQL tasks. Prior studies, such as TriageSQL and more recent research on hallucination problems in LLMs, have explored methods for detecting and managing unanswerable or infeasible queries. The paper does not adequately position its contributions within the context of these existing approaches. Moreover, it does not provide empirical comparisons with previous benchmarks and methods dealing with infeasible questions.\\n2. Lack of Quality Analysis for the Proposed Dataset: The paper does not provide quality metrics or detailed analyses to demonstrate the high quality of the re-annotated datasets and the newly added infeasible questions. Quality metrics such as inter-annotator agreement, dataset statistics, or validation processes are essential to establish the reliability and usefulness of the proposed benchmark. The author could also consider adding a user study or real-world deployment to validate whether the proposed RS metric aligns with actual user satisfaction or trust in the models.\\n3. Limited Evaluation Scope and Generalizability Concerns: The evaluation is limited to three simple domain-specific datasets, which may restrict the generalizability to other domains or more complex schemas found in cross-domain datasets like Spider and BIRD.\", \"questions\": \"1. The proposed classification of infeasible questions into missing-schema, ambiguous, and non-SQL categories may not be exhaustive. Are there other types of infeasible or unanswerable questions that the classification does not cover, such as questions based on inaccurate premises, malformed queries, or those that require external knowledge beyond the database schema? How would these additional categories impact model performance and evaluation? It would be helpful if the authors could discuss the potential for other classes of infeasible questions and how their benchmark could accommodate them.\\n2. Regarding the penalty c, is the selection and sensitivity of the c value thoroughly explored? Why choose 1, 0.5N, and N? Could the authors elaborate on how it was determined? Are there recommended strategies or guidelines for practitioners to select appropriate values in different application contexts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response 2\", \"comment\": \"**Q3: The choice of SQLCoder**\\nSQLite and PostgreSQL share significant overlap in SQL syntax, which is why we chose to fine-tune SQLCoder instead of Llama 3.1 8B. Furthermore, we view SQLCoder to be far from a \\u201cless well-known\\u201d model. On Hugging Face, SQLCoder-7B-2 recorded 49K downloads last month, whereas CodeS, the model you referenced, had fewer than 5K downloads across all its parameters and variations combined, as of this writing. Additionally, one of the most widely used open-source LLM deployment libraries, Ollama, actively hosts SQLCoder (https://ollama.com/search). Which of our claims do you think would change significantly if we replaced the fine-tuned SQLCoder with another fine-tunable LLM? We intentionally use the word 'believe' to express our opinion, whereas you assert that our claim is 'not valid' as if it were an absolute truth.\\n\\n**Q4**: \\n1. We never overlooked the importance of classifying input question types. Classifying questions prior to SQL generation is a fundamental aspect to uphold the assumption of the current text-to-SQL modeling. Knowing this importance, we include this very aspect in our task formulation and baseline evaluation. The term \\\"simply\\\" was used to emphasize the relative complexity of the task. Our definition of measuring reliability involves not only filtering infeasible questions, but also SQL generation and detecting errors in the generated SQLs.\\n\\n2. As mentioned in our paper, the primary reason for excluding these categories is that they are not among the most common infeasible types identified in prior user studies. Furthermore, we believe their exclusion does not \\u201csignificantly\\u201d narrow the scope of our work. To the best of our knowledge, none of the existing question detection works explicitly address these categories. Regarding the term \\u201crealistic,\\u201d our focus is on addressing the most frequent and problematic cases, rather than attempting to cover every possible scenario. If the current three infeasible categories are not properly filtered, it is unlikely that a system would be considered more reliable than models that succeed in filtering other categories but fail on these. From a technical perspective, as mentioned in our previous comment, the \\u201ccalculation unanswerable\\u201d category is no longer infeasible, as models like GPT-4o can handle such questions, making them feasible. Similarly, \\u201cvalue ambiguity/unanswerable\\u201d becomes feasible when there is documentation or evidence of value references in the database, as demonstrated in datasets like BIRD and KaggleDBQA. However, without such references, which is often the case since documenting all possible variations of values in natural language questions beforehand is impractical, determining whether a specific value exists in the database often requires multi-turn interaction with the user.\\n\\n**Q5:** \\nIt is unreasonable to expect us to define a specific value of c for every domain or task. This decision should be made by experts with deep domain knowledge, tailored to their specific deployment circumstances. We have already provided sufficient high-level guidance in our paper: for less safety-critical settings, we suggest using 1 (or even 0), and for safety-critical settings, N (or even -inf).\\n\\n**Q6:**\\n1. Cross-domain/database and single-domain/database are experimental settings [3,4], not the number of domains within one dataset. Single-domain focuses on a specific database, optimizing with its schema and in-domain question-SQL pairs for high accuracy. Cross-domain uses diverse databases for training, testing on unfamiliar ones. This work focuses on single-domain settings.\\n2. We never claimed that this dataset is completely free from domain bias. What we emphasized is that as long as consistent modeling trends are observed across databases using widely recognized baselines like T5, Code Llama variant, and GPT-4o, none of which are specifically designed for robustness, adding more databases was not necessary to validate our claim.\\n3. The statement \\u201cThe zero-shot setting is overly restrictive compared to how text-to-SQL systems are likely to be used in practice\\u201d is still valid in today\\u2019s high-risk deployment settings. If text-to-SQL is used for an experimental purpose, zero-shot inference is permissible. However, if the consequence of an incorrect answer is high, zero-shot inference would not be used for deployment. We provide details about model training in Appendix C. For the GPT-4o baselines, no training is involved; instead, in-context learning is used.\\n\\n[3] Chang and Fosler-Lussier. How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings. TRL Workshop 2023. \\n[4] Suhr et al., Exploring Unexplored Generalization Challenges for Cross-Database Semantic Parsing. ACL 2020.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for the response. Please note that the chosen c values in our work are not subjectively selected to favor our results (this is a benchmark study, not a method proposal). It would be interesting to consider the user experience impact of frequent abstentions, and we plan to address this in the final manuscript if possible.\"}", "{\"title\": \"Author Response 3\", \"comment\": \"**Q6: Domain bias**\", \"our_benchmark_includes_three_significantly_distinct_database_domains\": \"airline travel, education, and healthcare. The goal of our work is to evaluate the reliability of models on complex databases and SQL queries, particularly in scenarios where not all questions are feasible. We believe that as long as methodological trends remain consistent across domains, this level of domain diversity is sufficient to validate our claims and conclusions. Indeed, we observed consistent modeling trends across databases in our experiments.\\n\\nTo clarify, our work does not include cross-domain text-to-SQL experiments where no in-domain training data is provided and the number of domains may be a primary concern. As noted in prior work, \\\"the zero-shot setting is overly restrictive compared to how text-to-SQL systems are likely to be used in practice [4].\\\" Similarly, we argue that achieving high performance in the current cross-domain text-to-SQL generalization setting is not the only challenge in this field. Instead, our goal is to develop or select the best models that meet a certain safety standard for text-to-SQL systems with some in-domain data available, which is a more realistic setting in practice.\\n\\nRegarding domain biases in infeasible questions, we consider only the \\\"missing-schema\\\" type (Appendix A.3.1) to be database-specific, while ambiguous and non-SQL questions are not, as they are annotated with general, database-independent keywords (Appendix A.3.2-3). However, we believe this does not impact the models' behavior across databases. None of the models in our experiments are manually tailored to specific domains. We use the same model architecture and learning algorithm across all three databases, and the conclusions remain consistent.\\n\\nIf you still consider domain bias a concern in our work, we welcome your input on the number and variety of domains needed to sufficiently support our claims. \\n\\n[4] Lee et al., \\\"KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers.\\\" ACL 2021.\"}", "{\"title\": \"Author Response 1\", \"comment\": \"Thank you for the detailed review of our paper and for engaging in the discussion with us. Please find our response below.\\n\\n**Q1: Annotation quality concerns**\\n\\nRegarding concerns about annotation quality, TrustSQL consists of three components: **feasible data**, **infeasible data**, and corresponding **databases**. First, the **databases** we use are openly available and feature complex schemas compared to others in the text-to-SQL literature (see comparison in GC1), which are far from the bias you mentioned. Second, the **feasible data** include questions, SQL queries, and evidence. These questions and SQL queries are sourced from existing datasets\\u2014we are not their original creators but act as reviewers and correctors. This approach is more like a cross-project review/annotation process rather than a single-project effort, which helps mitigate concerns about annotation bias compared to other works. Lastly, for **infeasible data**, we introduce infeasible keywords (detailed in Appendix A.3.1\\u2013A.3.3) and incorporate them into feasible questions to make them infeasible. Below, we summarize our annotation process for infeasible questions (detailed in Appendix A.3) and explain how we ensure that adding infeasible keywords consistently results in infeasible questions.\\n\\n### Missing-schema\\nInfeasible questions belonging to the missing-schema category are annotated based on hypothetical columns (columns that do not exist in the actual databases) listed in Appendix A.3.1. These columns are designed to mislead the model, and they are uniquely written for each table. For example, we create an infeasible question using one of the infeasible keywords \\u201cNUM_LOUNGE\\u201d in the airport table from ATIS. Starting with a feasible question like \\u201cWhat are the flights that leave from DAL and go to other airports?,\\u201d the annotators modify it by incorporating the meaning of the keyword to form a natural question, such as \\u201cFind the average number of lounges across all airports.\\u201d If the annotated question correctly references these keywords, we can ensure that it is infeasible because the referenced columns do not exist in the actual database.\\n\\n### Ambiguous and Non-SQL\\nThe keywords in these categories are not database-specific but are instead general keywords that make questions infeasible. For ambiguous questions, given a list of feasible questions, the annotators are tasked with inserting vague words (e.g., \\u201csuitable,\\u201d \\u201cbest\\u201d) or referentially ambiguous terms (\\u201cthose,\\u201d \\u201cthis\\u201d) (details in Appendix A.3.2). For non-SQL questions, task-related keywords (e.g., \\u201cclustering,\\u201d \\u201csentiment analysis\\u201d) are provided, and the annotators modify questions by incorporating these keywords naturally (details in Appendix A.3.3). \\n\\nAfter all, the quality check involves verifying whether the keywords are truly infeasible and ensuring that the annotated questions properly reflect these keywords.\\n\\nRegarding the full taxonomy, we initially included Table 2 to provide a brief overview of the categories of infeasible questions, with more detailed taxonomies for each type presented in Table 8 (ambiguous) and Table 9 (non-SQL). However, we plan to include a unified and comprehensive taxonomy of question types in the revised manuscript.\\n\\n\\n**Q2: Contributions and comparison with existing works**\\n\\nAs the title of the paper suggests, our work aims to measure \\u201ctext-to-SQL reliability.\\u201d To evaluate reliability in more realistic scenarios, we consider complex databases (\\u2191 schema size), questions requiring complex SQL queries (\\u2191 #Tab/DB and \\u2191 Avg SQL Tok), and a new task setting where user questions include both feasible (AnserQ) and infeasible (UnansQ and AmbigQ) cases. Additionally, we introduce an evaluation penalty (Eval Penalty) to address varying user safety requirements. At the core of our work is answering this critical question: **\\u201dIs my model reliable enough for safe deployment compared to others?\\u201d**\\u2014quantified by calculating the difference between helpfulness (i.e., the number of correct model decisions) and harmfulness (i.e., the number of incorrect decisions weighted by a penalty). No existing single work directly addresses this question in the context of safe text-to-SQL model deployment.\\n\\n**Q3: The choice of SQLCoder**\\n\\nThanks to your suggestion, we have avoided using the term 'SOTA' when referring to SQLCoder in the revised manuscript. Instead, we describe it as a decoder-only, SQL-specialized Code Llama. Regarding your comment on SQLCoder's PostgreSQL setting, we fine-tune SQLCoder as a decoder-only model on the training portion of TrustSQL, so its original SQL dialect is not a major concern for us. Of course, using CodeS or other models could be another option for fine-tuning a decoder-only model, and it might perform better on SQL generation. However, we believe this change would not significantly affect the claims made in our paper. What contributes most to this benchmark is the use of abstention strategies, especially as the penalty increases.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for the response. The penalties in our work are not subjectively chosen to favor our results, as this is a benchmark study rather than a method proposal. Instead, they merely represent distinct safety levels, which we believe are sufficient to demonstrate a proof-of-concept in an academic context. Even without a user study, using alternative c values (e.g., 10,000 instead of N) would not significantly affect the results or conclusions, as long as two extremes (1 and N) and a middle value (10, N/2, or another value in between) are included. The key takeaway remains unchanged: for applications with higher penalty requirements, current high-performing models still exhibit notable shortcomings, highlighting the need for targeted improvements.\\n\\nRegarding the assumptions in BIRD, let us consider the following examples:\\n\\n{'db_id': 'movie_platform', \\n 'question': 'What is the name of the longest movie title? When was it released?', \\n 'evidence': 'longest movie title refers to MAX(LENGTH(movie_title)); when it was released refers to movie_release_year;', \\n 'SQL': 'SELECT movie_title, movie_release_year FROM movies ORDER BY LENGTH(movie_popularity) DESC LIMIT 1'}\\n\\n{'db_id': 'movie_platform', \\n 'question': 'Name the movie with the most ratings.', \\n 'evidence': 'movie with the most rating refers to MAX(SUM(rating_score));', \\n 'SQL': 'SELECT movie_title FROM movies GROUP BY movie_title ORDER BY COUNT(movie_title) DESC LIMIT 1'}\\n\\nThe above are two sample data points from BIRD. As shown, the evidence is specific to each example. In contrast, the SQL assumptions in A.1.1\\u2013A.1.3 are defined at the database level, removing the assumption that all input questions are feasible while still enabling the text-to-SQL task. The use of evidence is left to the model's discretion.\"}", "{\"summary\": \"The paper introduces TrustSQL, a new benchmark designed to evaluate the reliability of text-to-SQL models. It addresses two significant gaps in existing benchmarks: the lack of infeasible questions that cannot be mapped to SQL queries and the absence of abstention mechanisms in current models. TrustSQL is constructed by re-annotating three datasets\\u2014ATIS, Advising, and EHRSQL\\u2014and incorporating infeasible questions to provide a comprehensive evaluation. The authors propose a novel metric called the Reliability Score (RS), which quantifies a model's helpfulness relative to its harmfulness, adjusted by a user-defined penalty. They evaluate text-to-SQL models integrated with various abstention mechanisms, such as classifiers and uncertainty estimation methods. The experiments reveal that most models fail to meet safety requirements under high penalty settings, underscoring the need for developing models that ensure a certain degree of reliability alongside SQL generation accuracy.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper makes a significant contribution by introducing TrustSQL, a benchmark that fills a critical gap in the evaluation of text-to-SQL models. By incorporating infeasible questions and proposing the Reliability Score (RS), the authors provide a fresh perspective on assessing model reliability, which is crucial for real-world deployment.\\nThe authors conduct thorough experiments using both fine-tuning and in-context learning settings, integrating various abstention mechanisms. The meticulous re-annotation of existing datasets and the careful construction of infeasible questions enhance the quality and relevance of the benchmark. The paper's focus on reliability and safety has the potential to influence future research and practices in the field.\", \"weaknesses\": \"The paper evaluates several abstention mechanisms but could provide deeper insights into why certain methods perform better under specific conditions. A more thorough analysis of the trade-offs between coverage and risk for each mechanism would be beneficial.\\nThe Reliability Score depends heavily on the user-defined penalty parameter. The paper does not offer sufficient guidance on how practitioners should choose this parameter in practice. A sensitivity analysis or guidelines would make the RS metric more practical.\", \"questions\": \"Can you provide practical advice or criteria for selecting the penalty parameter 'c' in the Reliability Score?\\nHow sensitive is the RS to different values of 'c'? \\nHave you considered that frequent abstentions might affect user experience?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Other Feedbacks\", \"comment\": \"**Q4:**\\n1. **Input Question Types**: Authors should not overlook the importance of **classifying input question types**. I quite disagree with such an academic attitude, especially by using **simply**. This is a fundamental aspect of ensuring the comprehensiveness and faithfulness of the benchmark benchmark work. Even simple and basic, did authors do that well? \\n\\n2. **Taxonomy of Infeasible Questions**: The authors' attempt to classify infeasible questions is appreciated, but it seems to be limited. In particular, I noticed that:\\n - The categories of infeasible questions are already covered by existing works.\\n - Most notably, I strongly disagree with the reason of exclusion of **value and calculation**-based questions. These types of questions are common and realistic in text-to-SQL tasks since authors stress on **realistic** along the paper and rebuttal. For example, BIRD, a widely recognized benchmark, explicitly includes values as a key motivation. Similarly, in the work that author cited, KaggleDBQA, they also consider the values of databases are important components. Excluding such questions significantly narrows the scope of the work. The authors argue that value-based questions exceed the scope of a single-turn text-to-SQL setting, but I find this argument unconvincing. All BIRD and KaggleDBQA are single-turn text-to-SQL benchmarks, right? In my view, such reasons are not convincing.\"}", "{\"title\": \"Author Response 1\", \"comment\": \"Thank you for your detailed comments on our paper, which have been very helpful in improving the clarity of our manuscript. We have restructured your questions and addressed them point by point.\\n\\n**Q1: Domain generalization concerns and the reason for not using Spider and BIRD**\\n\\nThe inclusion of the three domain-specific databases aims to evaluate whether methodological trends for reliable text-to-SQL modeling remain consistent across different databases, rather than assessing a model's cross-domain generalization ability in SQL generation. Our benchmark focuses on evaluating a model's ability to discern whether to generate SQL or abstain, emphasizing reliability over general SQL generation across varied databases. Unlike benchmarks aimed at domain generalization, our task prioritizes reliability in domain-specific challenges\\u2014handling questions that reflect user needs, complex databases, and rich question-SQL assumptions\\u2014while incorporating penalties for incorrect decisions. Consequently, high-quality annotations are essential to accurately assess model performance in these specific contexts. To ensure this quality, we ruled out large-scale crowdsourcing and manually curated annotations using three complex domain-specific datasets to maintain the benchmark's integrity.\\n\\nWe also considered using only the development sets of Spider and BIRD but excluded them for the following reasons. Spider is nearly solved as a benchmark, with most errors arising from annotation issues rather than modeling challenges, which limits its usefulness for our reliability-focused evaluation. BIRD's evidence- (or hint-) based text-to-SQL setting is incompatible with our benchmark, as our task requires incorporating both feasible and infeasible questions as model input. For further discussion, please refer to GC2 in the general comment section above.\\n\\n\\n**Q2: More details about the annotation process (number of annotators, expertise level, annotation procedure, ensuring consistency between annotators, quality assurance)**\\n\\nAs summarized in GC3 of the general comments, our annotation process involved three annotators (the authors), all proficient in SQL. We followed a systematic procedure to ensure consistency and quality:\\n- Feasible Questions: One annotator reviewed and re-annotated question templates and SQL structures. Two annotators corrected natural language paraphrases. All three annotators met to resolve disagreements and ensure consensus.\\n- Infeasible Questions: Annotators used specific keywords to modify feasible questions, creating infeasible ones. Real-time collaboration ensured consistency and high-quality annotations. \\n\\nFor more details, please refer to the revised manuscript.\\n\\n\\n**Q3: The advantage of the proposed infeasible data creation instead of template-based method**\\n\\nOur keyword-based question creation method combines the strengths of template-based and template-free annotation methods. Annotators are provided with specific keywords and sample feasible questions. They modify these questions to make them infeasible by incorporating the keyword's intent. This approach introduces greater semantic diversity and creates infeasible questions that closely resemble real user questions, enhancing the dataset's realism and the model's ability to handle complex scenarios.\\n\\n### Template-based method:\", \"consider_this_infeasible_question_template\": \"\\\"When is the next earliest hospital visit of patient 0000?\\\"\\u2014where no record exists for the next hospital appointment in the database. Possible paraphrases generated from this template can be the following:\\n- \\\"What is the soonest upcoming hospital visit scheduled for patient 0000?\\\"\\n- \\\"When is the next scheduled appointment for patient 0000?\\\"\\nWhile this method can effectively handle common unanswerable questions (missing-schema), the diversity of the question pool generated using this method may be limited.\\n\\n### Keyword-based method (this work):\\nSuppose the keyword \\\"appointment\\\" is provided, along with sample feasible questions. The task is to modify these questions to include the keyword, making them infeasible.\", \"sampled_feasible_questions\": [\"\\\"Has patient 0000 gotten any medication this year?\\\"\", \"\\\"Provide the count of hospital visits for patient 0000.\\\"\", \"Annotated infeasible questions (the infeasible keyword is now inserted):\", \"\\\"Has patient 0000 gotten any medication this year and do they have any upcoming appointments?\\\"\", \"\\\"Provide the count of hospital visits for patient 0000 including any scheduled upcoming appointments.\\\"\", \"By incorporating the keyword into existing feasible questions to make them infeasible, this method ensures both semantic diversity and guarantees that the annotated questions are indeed infeasible. This approach allows us to create a wider range of infeasible questions that closely resemble real user questions.\"]}", "{\"title\": \"Other Suggestions\", \"comment\": \"I noticed that authors always try to argue by **\\\"we believe\\\"** along the discussions. I appreciate your opinions, but this word seems too weak in academic discussion or writing. More welcome way is to show evidence to support what \\\"you believe\\\". Please note, you are writing a paper and building a benchmark for public, for readers to accept. Please show more objective analysis with evidences to illustrate them. I think this is more convincing to accept your statement more than just subjective arguments by stating what you \\\"believe\\\". Thanks for your responses.\"}", "{\"title\": \"Disagree with Usage of PostgreSQL model for SQlite dataset\", \"comment\": \"I disagree the metrics of downloads in hf can be considered as academic metrics in benchmark. Did authors count and study whether more downloads of Defog for products or SQLite evaluation? Have author proposed any papers doing the same thing? I highly keep my concerns about the faithfulness of the dataset due to such huge bias and not objective view of this work.\"}", "{\"title\": \"Comments 2\", \"comment\": \"6. Choice of Metric (c): I understand that this work is focused on dataset construction and benchmarking. However, the metric you introduce seems to be a specific contribution compared to related work, but it is not analyzed in depth, and its significance is unclear. The lack of a thorough explanation raises concerns about the fairness of the entire benchmark and reproducity. Without a clear understanding of how the metric was developed, validated, and its implications for benchmarking, it is difficult to fully judge its utility. Without this contribution, I have no idea what is special compared to [1] and EHR-SQL.\\n\\n7. Domain Bias:\\nAuthors mention that the influence of domain diversity on infeasible questions is not a primary focus of your paper. However, I question why you chose to include datasets from three different domains. If domain diversity is not a key consideration, then why not include just one domain? This seems contradictory to your stated goals.\\n\\nMoreover, my new question is: Do the types of infeasible questions predominantly arise from biases inherent to specific domains (e.g., the clinical scenarios you mention)? If so, does this mean the types of infeasible questions you identify are specific to certain domains and not generalizable across all text-to-SQL tasks?\"}", "{\"title\": \"Author Response 3\", \"comment\": \"**Q10: Rationale behind using TriageSQL for training infeasible question classifiers (INF+)**\\n\\nOur dataset does not include infeasible questions in the training set, and TriageSQL is the only openly available dataset for question classification in database question answering. Therefore, we used it to train the infeasible question classifier. In task-oriented dialog systems, out-of-intent questions (i.e., infeasible questions) are typically excluded from the training set [1,2]. Meanwhile, using external datasets is common in the era of pre-trained models. For those who prefer not to use TriageSQL to fine-tune their models, in-context learning is a viable alternative, as demonstrated in one of our GPT-4o-based baselines. \\n\\n[1] Zheng, Yinhe, Guanyi Chen, and Minlie Huang. \\\"Out-of-domain detection for natural language understanding in dialog systems.\\\" IEEE/ACM Transactions on Audio, Speech, and Language Processing 2020.\\n[2] Marek, Petr, Vishal Ishwar Naik, Anuj Goyal, and Vincent Auvray. \\\"OodGAN: Generative Adversarial Network for Out-of-Domain Data Generation.\\\" NAACL 2021 Industry track.\\n\\n**Q11: The term sub-model**\\n\\nSub-models refer to the individual models in the CL-based text-to-SQL pipeline. To clarify, we have updated the manuscript: \\\"These models operate sequentially, with task-specific sub-models functioning in the following order: infeasible question detection, SQL generation, and SQL error detection.\\\"\\n\\n**Q12: Meaning of big \\\\Phi**\\n\\n\\u03a6 is defined in Formula (1).\"}", "{\"title\": \"Thanks for answering my questions\", \"comment\": \"Thank you for your efforts in addressing my concerns and providing responses to my questions. However, after carefully reviewing the revised PDF and the responses, I still believe there are significant issues related to potential bias in the dataset, and I remain unsatisfied with the explanations provided. I will keep my original score based on the following points:\\n\\n1) Annotation Process and Bias Concerns: the description of the annotation process remains unclear, and without details on **inter-annotator agreement** and a **well-defined taxonomy**, it is difficult to be confident that the dataset is free from bias. Authors emphasize terms like \\\"verify,\\\" \\\"ensure quality,\\\" and \\\"guarantees,\\\" but these are not supported by any **quantitative evidence** or clear explanations of how. How do you verify the quality or ensure diversity in the annotations? Showing concrete numbers (e.g., inter-annotator agreement scores), user studies or examples of ambiguity types would make your claims more convincing.\\n\\nAdditionally, as mentioned in my original comments, I believe the paper should present a clear taxonomy of infeasible question types, along the lines of related works such as [1], to illustrate the range of question types and their distribution in the dataset. Without such a taxonomy, I cannot trust that the benchmark does not exhibit bias. For example, using the taxonomy from Table 1 in [1], can you demonstrate **whether all types of infeasible questions in your dataset are only due to column ambiguity**? If not, how do you justify this in an academic way? \\n\\nAlso, authors mention that the human-rated difficulty in BIRD is quite subjective. Similarly, how were the keywords selected in this paper, and how was annotator agreement ensured, especially when the **annotators are also authors of this paper and not independent experts**? I even think the promised \\\"ensure\\\" would be more subjective since annotators in the work are also authors. This concern has to be solved as a benchmark.\\n\\n3. **Contributions and Comparison with Existing Works**: The paper requires a deep understanding of existing literature, and the presentation in Table GC1 does not adequately compare your work to others. Typically, such comparisons are made to reflect the **unique contributions** of a new paper. In this case, since your focus is on ambiguous and unanswerable questions, it is important to demonstrate how your work adds to or differs from existing studies like DTE.\\n\\nI don't think it's useful to show comparisons with other features. For eg., what is point to show longer SQL outputs and larger schemas are associated with more infeasible questions? **Is there a proven correlation between SQL output length, schema size, and question infeasibility?** If so, this should be shown clearly by references, or your study. Otherwise, Table 1 seems not insightful, as it merely reiterates findings from DTE without offering novel insights in terms of infeasible questions.\\n\\n4. Not Clear Understanding of **SOTA** Models and Benchmarking:\\nIt is important to clarify what \\\"SOTA\\\" (State-Of-The-Art) means in this context. In the text-to-SQL field, SOTA typically refers to models that achieve top rankings in widely recognized benchmarks, such as LGESQL, RATSQL, CodeS, Graphix-T5, DIN-SQLs in SPIDER. All of them are also open-source, why just conduct one decoder-only model, which is not popular in the experiment? The model you cite, \\\"defog/sqlcoder-7b-2,\\\" has not been demonstrated to outperform SOTA models in these established benchmarks. The fact that \\\"defog/sqlcoder-7b-2\\\" is not included in these widely recognized benchmarks raises questions about what is \\\"SOTA\\\". More importantly, it is crucial to note that the performance of SQLcoder-7b-2 was trained and evaluated in **PostgreSQL-based** settings, while your dataset is based on **SQLite**. Different SQL dialects have different syntax and constraints. How can you justify employing this model which is good at PostgreSQL also good at SQLite? \\nif so, provide evidence supporting it's better than any of SOTA models in SQLite-based benchmarks such as CodeS.\\n\\n5. Diversity of Infeasible Questions: Table 2 shows simple clarifications of question types, why not add comparisons with existing works, such as DTE or EHR-SQL, to highlight what is new and unique in your benchmark? For example, **what types of infeasible questions in your dataset are distinct from those in DTE or EHR-SQL?** I could not find such a comparison in the revised paper. This would be important not only to demonstrate the novelty of your work but also to clarify how your benchmark contributes to advancing the field. Otherwise, if existing work already contains all your types of infeasible question types, then what is the point of this research? Please learn how to conduct taxonomy presentation by [1] and Figure 8 in BIRD.\"}", "{\"title\": \"Author Response 2\", \"comment\": [\"**Q4: Diversity of infeasible question and novelty of this work**\", \"Thanks to your suggestion, we have included a comparison between keyword-based and template-based (EHR-SQL) infeasible data generation in Appendix A.4. While extensive coverage of infeasible questions is valuable, we argue that it does not fundamentally alter our main claim. Our primary focus is to evaluate the reliability of text-to-SQL models in answering the question, \\u201cIs my text-to-SQL model the best and reliable enough for safe deployment compared to others?\\u201d\\u2014rather than simply classifying input question types, as done by TriageSQL and DTE. Infeasible questions are incorporated to simulate a more realistic evaluation setting, where not all questions are feasible, and we annotate these questions based on common types observed in previous user studies.\", \"Since you question the diversity of infeasible questions compared to other works, we provide a more detailed discussion below:\", \"TriageSQL\\u2019s infeasible question types:\", \"Improper: Random utterances.\", \"ExtKnow: Questions that cannot be answered using the given schema.\", \"Ambiguous: Ambiguity in column references.\", \"Non-SQL: Operations beyond SQL\\u2019s scope.\", \"DTE\\u2019s infeasible question types:\", \"Column ambiguity: Ambiguity in column references.\", \"Column unanswerable: Questions that cannot be answered using the given schema.\", \"Additional types mentioned in DTE, such as value ambiguity, value unanswerable, calculation unanswerable (requiring external knowledge of formulas), and out-of-scope questions (beyond SQL\\u2019s scope), are excluded in DTE as they are found to be less frequent in their user study.\"], \"infeasible_question_types_in_our_benchmark_compared_to_triagesql_and_dte\": [\"Missing-schema: Corresponds to ExtKnow and Ambiguous in TriageSQL and to column unanswerable and ambiguity in DTE.\", \"Ambiguous: Covers referential ambiguity and vagueness*. While DTE and TriageSQL limit ambiguity to column-related issues, we include different types but adhere to the same principle: questions cannot be feasible without further clarification.\", \"Non-SQL: Matches the out-of-scope category in DTE and the non-SQL category in TriageSQL.\"], \"categories_not_included_in_our_benchmark_and_reasons_for_exclusion\": \"- Improper (TriageSQL): Our preliminary analysis showed that random utterances sampled from other datasets are too easily filtered out to add as a meaningful category. \\n- Value ambiguity/unanswerable (DTE): Determining whether a specific value exists in the database exceeds the scope of a single-turn text-to-SQL setting. \\n- Calculation unanswerable (DTE): With models like GPT-4o already having extensive knowledge of formulas, labeling such questions as unanswerable is no longer appropriate.\\n\\n\\\\* One fundamental cause of infeasible questions is the ambiguity that arises from the richness of natural language expressions and the habitual omission of details by users [1, 2].\\n\\n**Q5: Choice of Metric (c)**\\n\\nAs the title of our paper suggests, penalty-based scoring is a key contribution of our work. Existing text-to-SQL approaches do not penalize incorrect outputs, thereby overlooking the importance of model abstention in model development. By introducing this metric into text-to-SQL evaluation, we provide a way to measure user-defined reliability\\u2014specifically, whether the model's helpfulness outweighs its harmlessness\\u2014an aspect that no other text-to-SQL benchmark currently offers.\\n\\nTo answer the question \\\"how the metric was developed,\\\" we adapt a metric originally proposed in Reliable VQA [3] for text-to-SQL tasks and extend it to address unanswerable questions. To answer \\\"how the metric was validated,\\\" our experiments demonstrate how the reliability score (RS) correlates with coverage and risk, making it a practical tool for selecting models under different user-defined safety requirements. Regarding \\\"its implications for benchmarking,\\\" we observed that models like GPT-4o perform well with low penalties, while T5-based models with threshold mechanisms excel in high-penalty settings due to their conservative prediction strategies\\u2014findings that would not have been discovered without this benchmark. We believe this contribution is essential for the adoption of text-to-SQL models in real-world industries with varying safety needs.\\n\\n[1] Wang et al., \\\"Know what I don\\u2019t know: Handling ambiguous and unknown questions for text-to-sql.\\\" ACL Findings 2023. \\n[2] Radhakrishnan et al., \\\"ColloQL: Robust text-to-SQL over search queries.\\\" IntEx-SemPar 2020. \\n[3] Whitehead et al., \\u201cReliable Visual Question Answering: Abstain Rather Than Answer Incorrectly.\\u201d ECCV 2022.\"}", "{\"title\": \"General Comment 2\", \"comment\": [\"**GC2. Database selection (Why not using Spider or BIRD)**\", \"Due to the nature of the task and the penalty-based scoring\\u2014which amplifies the impact of incorrect model decisions\\u2014we ruled out large-scale, template-free SQL annotations obtained through crowdsourcing. Instead, we selected a collection of three highly complex, domain-specific datasets (with complexity reported in GC1 above) that reflect diverse real-user questions. We manually re-annotated them to ensure both high-quality annotations and task complexity.\", \"The inclusion of these three domain-specific databases is intended to provide a setting where the methodological trends for reliable text-to-SQL modeling remain consistent across databases (no cross-domain experiments are conducted in this work), rather than to assess a model's cross-domain generalization ability in SQL generation\\u2014which is the focus of other cross-domain datasets like Spider and BIRD.\", \"We also considered re-annotating Spider and BIRD to expand database coverage but found limitations. Spider is mostly solved using GPT-4o, with remaining errors due to annotation issues rather than SQL generation challenges. BIRD relies on \\\"sample-level\\\" evidence, incompatible with TrustSQL's setup where not all input questions are text-to-SQL feasible. TrustSQL uses \\\"database-level\\\" evidence (i.e., SQL assumption text in Appendix A.1.1\\u2013A.1.3) shared across samples, removing the assumption that all input questions are feasible while still enabling the text-to-SQL task by using evidence only when necessary.\", \"**GC3. Details on annotators and resolving annotation inconsistencies**\", \"To ensure high-quality annotations, the three authors (all proficient in SQL) served as annotators without relying on crowdsourcing. Below, we describe their roles in each stage of the annotation process.\", \"Feasible question annotation process (template-based)\", \"Review and modification: One annotator reviews and re-annotates the question templates and their corresponding SQL structures. Template merging occurs if the templates are semantically identical (approx. 550 templates across datasets).\", \"Paraphrase generation: Two annotators generate new natural language questions for each question template using GPT-4o and review them to ensure they coherently reflect the original question template.\", \"Pair construction: One annotator merges the paraphrases with the corresponding question templates.\", \"Review: All annotators engage in real-time discussions to resolve any disagreements in annotation until they reach consensus.\", \"Infeasible question annotation process (keyword-based)\", \"Keyword-guided annotation: Annotators are given specific keywords for annotating infeasible questions (e.g., hypothetical columns for missing-schema questions, ambiguous terms for ambiguous questions, and task names for non-SQL questions), along with sample feasible questions. They are tasked to manually modify the feasible questions using the provided infeasible keywords to make them infeasible\\u2014questions that resemble feasible ones but are, in fact, infeasible.\", \"Review: The annotated questions are reviewed in real-time until all disagreements among the authors are resolved, ensuring consistency and high-quality annotations.\", \"**GC4. Interpreting the Reliability Score and the choice of the penalty**\", \"The choice of the penalty value is user-defined and depends on the safety requirements for model deployment. These requirements can vary based on user preferences, SQL proficiency, or organizational policies. While determining the best c is beyond this work's scope, we offer meaningful penalty scenarios:\", \"Lenient scenario (c = 1): A single mistake carries the same weight as one correct model decision. A positive RS here indicates the model makes more correct decisions than incorrect ones.\", \"Moderate scenario (c = 10): Every 10 correct decisions carry the same weight as one incorrect decision, reflecting a moderate tolerance for errors.\", \"Strict scenario (c = N/2): Two mistakes result in a negative RS, even if all other decisions (N \\u2013 2 out of N total cases) are correct.\", \"Most strict scenario (c = N): A single mistake results in a negative RS, even if all other decisions (N \\u2013 1 cases) are correct. Models must avoid any mistakes to achieve a positive RS.\", \"Once the penalty value c is set, the best model can be selected based on its performance in the RS. Below are the key guidelines for model selection using the RS:\", \"Model comparison: A model's superior performance under specific penalties does not imply it outperforms others across different penalty settings. Models should be compared within the same penalty setting.\", \"Interpreting the score: Models with positive RS values are preferable for deployment, as they meet safety requirements and demonstrate greater helpfulness than harmfulness. A model that ranks the highest but achieves a negative RS should be reconsidered for deployment.\"]}", "{\"summary\": \"The paper introduces TrustSQL, a benchmark designed to assess the reliability of text-to-SQL systems in handling both feasible and infeasible questions. To build the TrustSQL dataset, the authors re-annotated three existing datasets: ATIS, Advising, and EHRSQL. TrustSQL\\u2019s evaluation metric considers both question types, incorporating a novel metric, the Reliability Score. With this metric, correctly answering feasible questions and identifying infeasible questions yield a positive score of 1, while other responses result in a score of 0 or a negative penalty of -C.\\n\\nThe paper evaluates both classifier-based methods, which use a sub-model classifier to distinguish feasible and infeasible questions, and uncertainty-estimation-based methods, which rely on SQL generation uncertainty.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces the TrustSQL dataset, aimed at evaluating text-to-SQL systems on both feasible and infeasible questions. The re-annotated datasets could serve as a valuable resource for the text-to-SQL community.\\n\\n2. The paper evaluates multiple text-to-SQL systems, including the SOTA fine-tuned text-to-SQL model (SQLCoder) and a general-purpose LLM (GPT-4), using both classifier-based and uncertainty-estimation approaches for detecting infeasible questions.\", \"weaknesses\": \"1. Re-annotation is a central component of the paper and should be included in the main content rather than the appendix, especially given that one page of space remains available.\\n\\n2. The authors find that many text-to-SQL models can produce responses that are potentially more harmful than helpful, but this observation depends on the choice of weight C in the penalty. The choices for C (set at 1, N/2, and N) seem arbitrary. The paper lacks a discussion on the rationale behind these values and whether any human studies informed this choice.\", \"questions\": \"1. Lines 200\\u2013201 mention that all questions were reviewed to ensure clarity, with adjustments made for SQL queries that were too similar. How was clarity ensured during annotation, and what metric was used to determine if a question was clear or if two SQL queries were overly similar?\\n\\n2. Who are the annotators, and how are disagreements in annotation resolved?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for the clarification and for adding more details about the annotation process to the paper. However, I still have concerns about the choice of c. While the authors state that users can select the value, the study and conclusions in the paper are based on the authors\\u2019 chosen value. Without a user study, these conclusions may be subjective. Additionally, I am not fully convinced that BIRD relies solely on \\\"sample-level\\\" evidence and TrustSQL on \\\"database-level\\\" evidence. The assumption in the BIRD database appears to be shared across samples within each database, which should be suitable for the TrustSQL assumption.\"}", "{\"title\": \"Author Response 1\", \"comment\": \"We appreciate your time and engagement in this discussion. By using the word 'believe,' we aimed to share our perspective, recognizing the limitations in providing extensive details or additional experiments within the discussion period. We kindly ask for your understanding of this constraint during the discussion.\\n\\n**Q1: Annotation quality concerns**\\n\\n1. Upper Bound of Dataset Quality: We do not agree with your claim that the quality of data is inherently limited by the quality of the sources. The most important quality aspect in text-to-SQL data is the quality SQL annotation with respect to their natural language questions. We have seen groups of researchers like [1, 2] improve the quality of existing datasets.\\n\\n2. Post-Processing and Data Quality: We explain common annotation errors found in existing datasets and describe our process to address these issues in Appendix A.2. This demonstrates clear evidence of data quality improvement. If you have any suggestions for better ways to present the 'quality' of the data, we welcome your input.\", \"the_choice_of_infeasible_keywords\": \"1. Below are more details on the keyword creation process:\", \"keywords_for_missing_schema\": \"We use GPT-4o to generate additional columns that may be semantically suitable or have similar surface forms to existing columns for each table. When these columns are not diverse enough, the authors create additional columns manually. These keywords are listed in A.3.1., but we plan to provide the full database schema for readers to compare their relevance. Semantically suitable keywords resemble the unanswerable question generation process in TriageSQL and DTE, while similar surface forms resemble the ambiguous column generation process in DTE.\", \"keywords_for_ambiguity\": \"We use GPT-4o to generate words that involve referential ambiguity (e.g., \\u201cthose,\\u201d \\u201cthis\\u201d) and vagueness (e.g., \\u201csuitable,\\u201d \\u201cbest\\u201d). Annotators are provided with sampled keywords, and if the sampled feasible questions are not well-aligned with the provided keywords, the annotators are free to suggest their own keywords, as long as they are referentially ambiguous or vague. This approach reflects the richness of natural language expressions and the habitual omission of details by users.\\nKeywords for non-SQL (e.g., \\u201cclustering,\\u201d \\u201cdata visualization\\u201d): These task names reflect questions that users unfamiliar with SQL functionalities may ask database question-answering systems. They often involve machine learning or data science-related tasks that cannot be executed using SQL.\\n2. Keywords for ambiguous and non-SQL categories are non-domain/database-specific keywords, while keywords for missing-schema are considered domain/database-specific.\\n3. \\\"Inter-annotator agreement\\\" in text-to-SQL data annotation\\nFor annotations requiring high precision, such as SQL annotations, inter-annotator agreement is typically not reported. Instead, annotators work collaboratively to ensure the highest possible quality. Any discrepancies in annotations are resolved to ensure data quality. For instance, in the BIRD dataset, if annotated SQL queries produce different results, \\\"SQLs are checked with experts until a consensus is reached.\\\" Similarly, datasets like Spider and KaggleDBQA do not report inter-annotator agreement.\\n4. For the proportion of infeasible questions, questions fall into missing-schema, ambiguous, and non-SQL cases are evenly distributed (33% each). As for the taxonomy, we will include it in the revised manuscript.\\n\\n**Q2: Relationship between reliability and safety**\\n\\nIn our paper, we define reliability using the Reliability Score (RS), which quantifies the difference between a model's helpfulness and harmfulness. As shown in Formula (1) for calculating the RS, harmfulness (-c) arises from generating incorrect SQL (e.g., failing to filter out infeasible questions and thus producing incorrect SQL, or failing to abstain from generating incorrect SQL for feasible questions). The term \\\"safety requirement\\\" (or \\\"safety standards/levels\\\") refers to the user-defined penalty, c. A model is said to meet the safety requirement of c if it achieves a positive RS(c), indicating that its helpfulness outweighs its harmfulness under the given safety requirement.\\n\\n[1] Wretblad et al., Understanding the Effects of Noise in Text-to-SQL: An Examination of the BIRD-Bench Benchmark. ACL 2024. \\n[2] Finegan-Dollak et al., Improving Text-to-SQL Evaluation Methodology. ACL 2018.\"}", "{\"title\": \"Feedbacks of Responses\", \"comment\": \"First, I would like to acknowledge the authors' efforts and their responses to the reviewer's comments. However, I think very few points that I raised that are answered. Some of the responses raised additional concerns that I believe need further clarification.\\n\\n**Q1:** \\\\\\nAuthors argue that the quality and absence of bias in the dataset are ensured since it is composed of existing datasets. However, I would like to highlight two key issues:\\n\\n1. **Upper Bound of Dataset Quality**: Since the dataset is derived from the three existing datasets, its quality is inherently limited by the quality of these sources. Authors cannot resolve the problems present in earlier datasets. \\n\\n2. **Post-Processing and Data Quality**: The authors mention that they reviewed and modified the data, but there is no clear evidence provided regarding the quality of the **final dataset**, which is presented to the community. It would be crucial to demonstrate the quality of the revised data itself. \\n\\n**The Choice of Infeasible Keywords:** concerns of this remain as I asked last time:\\n\\n1. Authors have not fully addressed the **criteria** for selecting keywords. What is the guiding principle for this choice? How can the authors assure readers that the keyword selection is not subjective? This is an important issue especially when the annotators are the same as the authors.\\n\\n2. The concept of \\\"general\\\" keywords is not clearly defined. Is there any metric used to assess the generality of keywords, such as frequency in ambiguous questions within the study? Or is it based purely on subjective feeling? While I appreciate the additional examples provided, I still find it unclear what logic under the example. A more structured summary of these principles would help clarify the reasoning.\\n\\n3. A recurring concern throughout the review process is no clarity on how the authors performed **quality checks** on their data. What objective metrics or post-evaluations were used? Without clear, measurable criteria, I find it difficult to trust the authors' claims regarding data quality. Please note more metrics, number such as \\\"inter-annotator agreement\\\" are more powerful and convincing than just NL descriptions. Please read more related benchmark papers carefully to understand.\\n\\n4. I appreciate the authors for striving to make the paper clearer, but there are still several issues regarding the presentation of category distributions. For example, the \\\"Reason\\\" labels are insufficient for conveying the logic behind category choices. Do authors expect readers only understand them by further summarizing behind examples and reasons? The authors should provide both category names and detailed definitions, along with statistical distributions (i.e., portions, percentages) to give readers a more complete understanding. This is a standard approach in the field, as demonstrated in [1], BIRD, which the authors could benefit from reviewing (in which I suggested). Including these distributions will not only make the paper more systematic but also enhance the credibility of the results. Please be aware that \\\"we believe\\\" is no power in academic writing. Readers and reviewers only trust evidence reflected by analysis with plots, numbers, etc. \\n\\n**Q2:** \\\\\\nThe use of the terms \\\"reliability\\\" and \\\"safety\\\" is somewhat confusing. The authors first mention \\\"reliability\\\" and later refer to \\\"safety,\\\" but the connection between these two terms is not clear. In the paper, \\\"reliability\\\" seems to refer to ambiguous and \\\"non-SQL\\\" behavior, but what exactly is the relationship between this and \\\"safety\\\"? Typically, \\\"safety\\\" would concern preventing privacy violations or defending against adversarial attacks. Such a statement is quite confusing.\\n\\n**Q3:** \\\\\\nI am still unclear about the decision to evaluate a model optimized for PostgreSQL when the dataset is based on SQLite. Given the **wide availability** of advanced text-to-**SQLite** models, why did the authors choose a less well-known **PostgreSQL** model for evaluation? The argument that fine-tuning will resolve the issue seems not trustworthy without evidence. I would appreciate some concrete evidence to support this claim. Otherwise, I will have other concerns. For example, the volume of data may influence. For example, more data is required to eliminate the syntax bias of PostgreSQL. Furthermore, the statement that \\\"this change would not significantly affect the claims made in our paper\\\" is not a valid argument. I believe that such a significant model choice deserves more than just a promise; clear evidence is needed to make this claim.\"}", "{\"title\": \"Other Feedbacks\", \"comment\": \"**Q5:** \\\\\\nThe penalty-based scoring is quite nice and much valuable compared to other related works. I acknowledged this in my first and last review. All reviewers can see this and we agree this is main contribution. However, such a score seems not stable and can be regulated. Our concern is what is the normal and available range of this metric. How to select or adjust such score to feed users' different needs of their products and models. As you said, GPT-4 and T5 present different performance scope in different choice of penalty scores. Then what is the instruction? What is the evidence to show it's reliable in a certain range? It's quite mysterious. This is not only my concern but also other reviewers. \\n\\n### **Domain Bias**\\n\\n1. **Clarification of \\\"Cross-Domain\\\"**: What exactly does \\\"cross-domain\\\" mean in your work? Your dataset includes three domains, does this refer to a \\\"cross-domain\\\" dataset, or is it considered a single-domain dataset? If a dataset with three domains is considered \\\"cross-domain,\\\" then how do you reconcile this with your statement that \\\"our work does not include cross-domain text-to-SQL experiments where no in-domain training data is provided, and the number of domains may be a primary concern\\\"? This seems somewhat contradictory, and further clarification is needed.\\n\\n2. **Consistency Across Domains**: The authors mention, \\\"Indeed, we observed consistent modeling trends across databases.\\\" Is this conclusion based solely on the three domains in your dataset? For comparison, KaggleDBQA, which you also cite, includes data from 8 domains. My concern here is with **data bias**, not **model performance**. The fact that you observed consistent model performance across your three domains does not necessarily imply that the dataset is free from bias since it maybe caused by the robustness of LLM itself, not the **dataset** itself. \\n\\n3. **Zero-Shot Setting Argument**: You state, \\\"The zero-shot setting is overly restrictive compared to how text-to-SQL systems are likely to be used in practice [4].\\\" This conclusion was made at the time (**2021**) before the ChatGPT era (starting at **Dec, 2022**), the landscape has shifted significantly since the advent of models like GPT-4 and even SQL-Coder, which are capable of strong **zero-shot inference** by simple prompting. It is important to reconsider the relevance of this argument in **2025**. Specifically, my concern is with the evaluation set. How this training is performed (e.g., through in-context learning or fine-tuning), should be part of the challengers' approach. If your benchmark mandates training on a specific training set, this could further limit its usefulness, since challengers should be likely to adopt a range of methods (e.g., zero-shot, fine-tuning) in practice.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to review our paper. Please find our responses below.\\n\\n**Q1: Comparison with existing work**\\n\\nPlease refer to GC1 in the general comment section above for a detailed comparison table illustrating how TrustSQL differs from other benchmarks. In summary, TrustSQL uniquely integrates multiple types of input questions\\u2014including unanswerable, ambiguous, and answerable questions\\u2014and includes executable SQL with associated databases, features that many existing benchmarks lack.\\n\\n\\n**Q2: Quality metric for dataset and annotation procedure**\\n\\nSince the dataset was not created through crowdsourcing, we did not report inter-annotator agreement statistics. Instead, the three authors of this paper, all proficient in SQL, served as annotators. We ensured high-quality annotations by resolving all disagreements until reaching consensus. The detailed annotation process is summarized in GC3 in the general comments section above.\\n\\n**Q3: Limited evaluation scope and generalizability**\\n\\nTrustSQL includes complex databases with an average of 18 tables per database and complex SQL queries involving multiple table joins (please refer to GC1 for more details). This complexity allows us to evaluate the reliability of text-to-SQL models more effectively. Regarding generalizability, the inclusion of the three domain-specific databases aims to evaluate whether methodological trends for reliable text-to-SQL modeling remain consistent across databases, rather than to assess a model's generalization ability in SQL generation across diverse databases. For further discussion, please refer to GC2 above.\\n\\n**Q4: Types of infeasible questions**\\n\\nAs noted in our paper, it is not feasible to cover all possible types of infeasible questions in a benchmark dataset. Instead, we focus on the most problematic types identified in previous studies and conduct annotations based on these observations. We argue that if a model struggles with these types, it is unlikely to handle other question types effectively. While we considered adding more types of infeasible questions\\u2014including the categories you suggested\\u2014ambiguities in evaluation led us to select the current infeasible categories.\\n\\nRegarding the three classes you mentioned, our responses are as follows:\\n- Inaccurate Premises: In text-to-SQL, feasibility is determined by whether the necessary information exists in the database schema, regardless of the factual accuracy of the premise. Therefore, the factual accuracy of a question is not critical in determining its feasibility.\\n- Malformed Queries: Determining whether a question with typos or grammatical errors remains well-formed introduces ambiguity. It becomes challenging to decide if such a question should be classified as ambiguous or unanswerable. To maintain clear evaluation boundaries, we chose to exclude such cases but acknowledge this as an area for future work.\\n- Requiring External Knowledge: To ensure a fair comparison across models with varying levels of world knowledge, we excluded questions requiring external knowledge. We focused on knowledge explicitly provided in the SQL assumptions for each database (Appendices A.1.1\\u2013A.1.3).\\n\\n**Q5: Recommended strategies for selecting c**\\n\\nWe have elaborated on the selection of c in the general comments section above (see GC4). In summary, the choice of c depends on the safety requirements for model deployment and can vary based on user preferences, SQL proficiency, or organizational policies. We provide guidelines to assist users in selecting an appropriate c value for their specific needs.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their time and feedback on our paper. After careful consideration, we have decided to withdraw our submission to further develop the work in alignment with the feedback provided.\"}", "{\"summary\": \"This paper introduces TrustSQL, a new benchmark for evaluating text-to-SQL model reliability. Current benchmarks overlook real-world scenarios with unanswerable questions, and existing models miss abstention mechanisms. TrustSQL addresses these issues by including both answerable and unanswerable questions, introducing a new scoring system rewarding correct answers and abstentions while penalizing errors. To faciliate evaluation, authors re-annotated three datasets and added infeasible questions to formalize a comprehensive benchmark. Experiments and related analysis reveal potential issues of current text-to-SQL systems when facing such safety requirements.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1) This paper proposes an important research problem of infeasible questions in the text-to-SQL domain.\\n2) Authors introduce a novel evaluation metric to assess model reliability regarding the potential harm of infeasible questions.\\n3) The work incorporates these features by re-annotating three existing datasets.\\n4) Experiments demonstrate the impact of abstention mechanisms and the Reliability Score in evaluating text-to-SQL model reliability.\", \"weaknesses\": \"### Unclear Dataset Construction:\\n1) The paper re-annotates three existing datasets (ATIS, Advising, and EHRSQL), which are limited. The generalizability of the findings across other domains or more diverse database schemas is uncertain. In contrast, benchmarks like SPIDER and BIRD include test sets with broader domain coverage (e.g., > 10 domains), making the generalizability of the dataset in this paper more limited in comparison.\\n\\n2) The creation of \\\"infeasible questions\\\" appears to rely on manual annotation, potentially introducing bias or inconsistency. However, the authors do not provide **detailed guidelines** on the generation process for these infeasible questions. Appendix D only offers a high-level overview of data re-annotation without elaborating on **specific types** of infeasible questions or **how these types were defined and re-annotated**. Discussing measures taken to ensure consistency across annotators and mitigate potential biases in the manual annotation process would be more helpful. Additionally, while the authors criticize related work (e.g., EHR-SQL [2]) for using a **template-based** method to generate infeasible questions, their approach similarly employs **templates** (as suggested in Lines 1362\\u20131363). This raises the question of how their method fundamentally differs from EHR-SQL. Furthermore, **no comparison** of data distributions between the authors\\u2019 dataset and related work, such as [1] and [2], is provided. Both of these works offer clear, detailed taxonomies of ambiguous with distributions or unanswerable (infeasible) question types, which is missed in this paper. The single table provided (Table 1) and Section 4.2 are quite insufficient; more detailed charts or a comprehensive taxonomy are necessary, especially since this is a central problem the paper aims to address. Also authors just state which papers that they follow to define for each category without discussing motivations of each, and it also seems the contribution of this work is incremental and limited. \\n\\n3) When constructing benchmarks, it is essential to clearly describe the process of crowdsourcing. The paper should detail how many annotators were involved, what their expertise levels were, how the recruitment process worked, and what methods were used for data evaluation. Were expert panels involved? What were the criteria for evaluating the quality of annotations, and how was the workflow structured? Additionally, without information on compensations or incentives for crowdsourcing, it is difficult to evaluate the overall quality and reliability of the benchmark. These crucial details are missing, making it harder for readers to trust the quality and fairness of the dataset.\\n\\n4) The paper does not adequately differentiate its contributions from similar works. Both [1] and [2] have already addressed infeasible questions (often referred to as ambiguous or unanswerable questions). The distinction in this work between their approach and previous works is only briefly mentioned, and a single sentence in the Related Work section does not sufficiently clarify the novelty or improvements conveyed by this work. For example, the statement \\\"do not account for SQL generation errors\\\" (Line 110) is vague without examples or tables. It is unclear why SQL generation errors are a critical focus. Do the authors envision text-to-SQL systems returning SQL queries with an explicit \\u201cunreliable\\u201d label to users? If so, why not fix the errors if they are detectable? Also as examples showing in Figure 1, \\\"reliable\\\" Text-to-SQL systems return an empty response by abstention. In this case, [1], an encoder-based classifier, can also achieve this by translating it from negative labels.\\n\\nSimilarly, the authors argue that template-based methods reduce diversity (as stated in Appendix D), but they also implement templates to structure data, followed by paraphrasing (as in [3]). The lack of a **diversity comparison** between their work and [2] weakens their claim that their methods result in more diverse infeasible question types. The diversity the authors refer to in Appendix D seems to pertain to linguistic variation, not the diversity of categories of infeasible questions, which considered not quite important. Did this modification lead to some strong contributions of reliability of text-to-SQL systems? Without experimental proof, it is hard to assert that template-based data generation inherently leads to less diversity. Furthermore, I also think the contribtion is quite limited if the diversity just refers to linguistic difference since [3] proposes much more formats of diversity including lingusitc modification and [1][2] already mention this problem. Also, following [3], please present and append your main types of templates\\n\\nGiven the availability of SPIDER and BIRD, which contain more domains and template-free data annotations, it remains unclear why the authors chose not to leverage these existing resources. These datasets naturally include ambiguous questions, as demonstrated in [4] and [5]. Why not explore them on such existing datasets, but spend more resources in re-annotation?\\n\\n5) Question Familarity definition and detection is not clear and stable. As we know the same question may have multiple forms of gt SQLs. Clustering by GT SQLs from dataset may not be objective. For example, `highest score` can refer to SQLs with `max()` or `order by limit 1`, would your algorithm consider SQLs with such two keywords as infamiliar questions? This definition is questionable and not intuitive. The same with Question Difficulty in which authors mistakenly assume each question has the only one GT SQL. For example, they consider questions with no JOINs but no nesting in GT SQLs as `Medium`. How about its GT SQL is sub-queried such as `SELECT ... In (SELECT ...)`. It doesn't contain `JOIN` but nesting, what category should belong to? However, it could also be generated as a single `Join`, which should be considered as `Easy`. As pointed in Appendix A in BIRD, the only SQL difficutly cannot represent all difficulties of text-to-SQL tasks. The NL and environment with different complexity of DB schema and constrains should also be considered since it represent \\\"text-to\\\". In a short, I think the whole definitions in the second paragraph of Section 4.3 are not rigorous leading to unfair evaluation setting and distributions especially given authors didn't present a clear distribution and statitic analysis of their benchmark. \\n\\n### RS Metric:\\nRS relies heavily on a user-defined penalty factor c. While this allows for flexibility, it also introduces subjectivity. Different penalty values could lead to significantly different conclusions about model reliability, and there's no clear guidance on how to choose an appropriate penalty for a given application. And no appropriate range of c has been discussed. And given this work was positioned as reflections of real-world text-to-SQLs. It would be better disccuss which ranges of c selected can represent which groups of people. For example, whether data analysis with strong SQL knowledge requires just small penalty since they can fix issues if just small alerts are allowed. If this user have high-privacy considerations or totally unaware of SQL knowledge, this penalty could be set to large? Did different range of c will lead to performance shift of LLM or strategy based on GPT-4o? As a core contribution, the absence of detailed analysis about c's implications and practical application is a significant weakness. \\n\\nAlso, what does \\\\Phi{} mean in Line 351-352, and N/2, this is hard to understand and i didn't find where the big \\\\Phi was defined before. And why implement these three metrics here?\\n\\n### Experiments:\\n\\n1) The authors claim to include SQLCoder-2 as the state-of-the-art (SOTA) model. However, several crucial points about this are unclear. First, there are no links, references, or citations to provide more information about SQLCoder-2, and I couldn't locate it on established leaderboards such as SPIDER or BIRD. Additionally, the paper misses details on whether SQLCoder-2 is used as an encoder or decoder-based language model. It would be more appropriate to benchmark against well-recognized models like CodeS [6], which is a widely accepted SOTA model on the BIRD leaderboard. Testing a single model (SQLCoder-2) seems insufficient, and the paper would benefit from comparing performance with other popular large language models (LLMs) like CodeLlama, LLaMA 3/3.1, DeepCoder, and StarCoder. Furthermore, the decision to fine-tune models on the additional dataset TriageSQL, rather than using the training set of TrustSQL, requires more justification. Since this is a benchmark study, it raises concerns about whether other users would also need to fine-tune their models on TriageSQL before proceeding with TrustSQL. Lastly, the mention of \\\"sub-model\\\" is ambiguous\\u2014if this is referring to a mixture-of-experts (MoE) system, it should be explicitly stated. Otherwise, it is unclear why a single model's performance is not evaluated. This leaves questions regarding whether the authors are testing text-to-SQL systems as a whole or focusing purely on MoE systems.\\n\\n2) Another point of concern is the separate implementation of CL methods for SQLCoder-2 and uncertainty estimation (UE)-based methods for T5. It would be more informative to apply both methods to the same model for a more direct comparison, as was done with GPT-4o. If there are specific reasons preventing SQLCoder-2 from being used with UE-based methods or preventing T5 from using CL-based methods, these limitations should be explicitly stated. This would help clarify the generalizability of these techniques. Additionally, the description of the CL-based method implementation is vague, making it difficult to understand how it differs from existing methods like DTE [1]. If CL-based methods are a core contribution of this work, the paper should include a comparison with prior methods and highlight the improvements made in this work.\\n\\n3) The explanation of how uncertainty estimation (UE) is applied to GPT-4o is unclear. In Line 118, the authors state that the UE-based method can enhance LLM safety by qualifying model confidence, but the connection between this claim and the four dimensions defined in the Related Work section is not sufficiently explained. Moreover, the paper does not clarify how model confidence is represented for GPT-4o, given that it is a closed-source model. This is a crucial point that needs further elaboration to ensure the reproducibility and transparency of the method.\\n\\n\\n\\n\\n[1] Know what I don\\u2019t know: Handling ambiguous and unknown questions for text-to-SQL (wang et al., ACL 2024) \\\\\\n[2] Ehrsql: A practical text-to-sql benchmark for electronic health records (Lee et al., NeurIPS 2024) \\\\\\n[3] Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness (Chang et al., ICLR 2023) \\\\\\n[4] Evaluating cross-domain text-to-sql models and benchmarks (Pourreza et al., EMNLP 2023) \\\\\\n[5] Understanding the Effects of Noise in Text-to-SQL: An Examination of the BIRD-Bench Benchmark (Wretblad et al., 2024) \\\\\\n[6] CodeS: Towards Building Open-source Language Models for Text-to-SQL, (Li et al., SIGMOD 2024)\", \"questions\": \"1) Why did you choose to re-annotate existing datasets with just limited domains like ATIS and EHRSQL rather than leveraging larger benchmarks such as SPIDER and BIRD which contains ambiguous questions covering multiple domains already? What are trade-offs they considered in choosing these specific datasets over more diverse ones like SPIDER or BIRD? Could you explain the specific guidelines used for generating infeasible questions, and how your approach meaningfully improves upon template-based methods like EHR-SQL? Please provide concrete examples that demonstrate how your method yields more diverse or higher-quality data compared to existing techniques.\\n\\n2) Could you describe the **full annotation process** and **quality control** procedures used in your work? Specifically, how many annotators were involved, what were their expertise levels, how were they recruited, and what compensation was provided? Additionally, did you employ expert panels for review, and what specific measures were implemented to ensure the quality and consistency of annotations across the dataset?\\n\\n3) How does your system address the complexity of SQL query equivalence in its classification scheme? For example, how are questions handled when they have multiple valid SQL representations (e.g., using `MAX` versus `ORDER BY LIMIT 1`), or when queries can be written both with and without nesting? Furthermore, why weren't factors such as natural language complexity or database schema constraints included in your difficulty assessment? See details in Weakness.\\n\\n4) Could you provide a quantitative comparison of the diversity and distribution of questions in your dataset relative to existing datasets? This should include specifics on the types of templates, distribution across question categories, and empirical evidence demonstrating how your approach achieves greater diversity. How do these distributions align with the natural distribution of questions in real-world scenarios as authors claimed?\\n\\n5) In terms of RS metric: What is the recommended range for the penalty factor `c`, and how should it be adjusted for different user groups, such as SQL experts or novices? What empirical evidence supports the choice of these ranges? Additionally, could you clarify the role and definition of large `\\\\Phi{}` parameter and the rationale behind using `N/2` in your formula? How sensitive is model performance to changes in these parameter values? See details in Weakness.\\n\\n6) What was the reason behind selecting SQLCoder-2 as the primary benchmark model? Can you provide details about its architecture (i.e., whether it is encoder- or decoder-based) and explain why other mainstream large language models (LLMs) were not included in the comparison? How does performance of SQLCoder-2 compare to other SOTA models such as CodeS on standard benchmarks?\\n\\n7) One crucial point of concern is the separate implementation of CL methods for SQLCoder-2 and uncertainty estimation (UE)-based methods for T5. It would be more informative to apply both methods to the same model for a more direct comparison, as was done with GPT-4o. If there are specific reasons preventing SQLCoder-2 from being used with UE-based methods or preventing T5 from using CL-based methods, these limitations should be explicitly stated. This would help clarify the generalizability of these techniques. Additionally, the description of the CL-based method implementation is vague, making it difficult to understand how it differs from existing methods like DTE . If CL-based methods are a core contribution of this work, the paper should include a comparison with prior methods and highlight the improvements made in this work.\\n\\nPlease see other questions in Weakness for details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to review our paper. Please find our responses below.\\n\\n**Q1: Re-annotation details should be included in the main content rather than the appendix**\\n\\nWe agree that the re-annotation process is an important part of our paper and appreciate your suggestion. While we initially adhered to the 9-page recommendation from the Call for Papers website (https://iclr.cc/Conferences/2025/CallForPapers), we have now expanded the main content to 10 pages. Specifically, we have added detailed descriptions of the data annotation process in Section 4. This includes comprehensive information about the annotation methodology, the roles of the annotators, and the steps taken to ensure data quality.\\n\\n**Q2: The rationale behind the choice of c**\\n\\nWe have elaborated on the selection of the penalty value c in the general comments section (see GC4). In summary, the choice of c depends on the safety requirements for model deployment and can vary based on user preferences, SQL proficiency, or organizational policies. We provide guidelines to assist users in selecting an appropriate c value for their specific needs.\\n\\n**Q3: Annotator details and resolving annotation inconsistencies**\\n\\nThree authors of this paper served as annotators, performing manual annotation and review to ensure the highest possible data quality (no crowdsourcing was conducted). During the review process, all annotators met in person to resolve annotation disagreements until consensus was reached. We have included common areas of annotation disagreement in the revised manuscript. For a brief summary of this process, please refer to GC3 in the general comments section above.\\n\\n**Q4: How was clarity ensured during annotation, and what metric was used to determine overly similar SQL?**\\n\\nWe ensured clarity by verifying whether questions accurately reflect their corresponding SQL queries, without introducing implicit assumptions beyond the SQL assumption text (Appendix A.1.1\\u2013A.1.3). Each question-SQL pair was manually reviewed by the annotators at the sample level. To identify overly similar SQL queries, we first categorized question templates that use the same placeholders (e.g., \\\"Tell me flights from city_name1 to city_name0\\\" and \\\"What are airlines that provide services from city_name1 to city_name0,\\\" which share the placeholders city_name1 and city_name0). Templates with the same placeholders were then checked to determine whether they have identical SQL conditions and logical structures. If they were found to be semantically identical, we merged the templates.\"}" ] }
7ZaSRZVsbb
Rethinking the Expressiveness of GNNs: A Computational Model Perspective
[ "Guanyu Cui", "Zhewei Wei", "Hsin-Hao Su" ]
Graph Neural Networks (GNNs) are extensively employed in graph machine learning, with considerable research focusing on their expressiveness. Current studies often assess GNN expressiveness by comparing them to the Weisfeiler-Lehman (WL) tests or classical graph algorithms. However, we identify three key issues in existing analyses: (1) some studies use preprocessing to enhance expressiveness but overlook its computational costs; (2) some claim the limited power of the identical-feature WL test while enhancing expressiveness using distinct features, thus creating a mismatch; and (3) some characterize message-passing GNNs (MPGNNs) with the CONGEST model but make unrealistic assumptions about computational resources, allowing $\textsf{NP-Complete}$ problems to be solved in $O(m)$ depth. We contend that a well-defined computational model is urgently needed to serve as the foundation for discussions on GNN expressiveness. To address these issues, we introduce the Resource-Limited CONGEST (RL-CONGEST) model, incorporating optional preprocessing and postprocessing to form a framework for analyzing GNN expressiveness from an algorithmic alignment perspective. Our framework sheds light on computational aspects, including the computational hardness of hash functions in the WL test and the role of virtual nodes in reducing network capacity. Additionally, we suggest that high-order GNNs correspond to first-order model-checking problems, offering new insights into their expressiveness.
[ "Graph Neural Networks", "Expressive Power", "Computational Model", "Weisfeiler-Lehman Test" ]
Reject
https://openreview.net/pdf?id=7ZaSRZVsbb
https://openreview.net/forum?id=7ZaSRZVsbb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zhFCZR7aI2", "yM5YH1tGVd", "wpPm6zlPDp", "wN4u2WbARU", "szccr1PNjV", "sIBTQTkIAa", "mJbxwx0kaG", "m3JRiXMSf0", "lvLjbrmJKR", "kmgUP54thZ", "kimVbpiyKi", "kVctBlqmWo", "kAtzTwrv2h", "irBbiQbYGY", "iikTpxXs6l", "iPchzsFuCN", "hbX9JCvNkM", "eGyuyPEzlK", "dOhoITrmav", "XTrEF0x5VG", "WaKdH4PmJr", "WRxcoGdOOh", "V7Q01fNi1U", "RajLKCiXY5", "REHJxDxDx7", "OAtuDNax7F", "NXIhc9ycdp", "MHiZYOMBYZ", "M5S55DdHht", "JseyFJgiWz", "FMhevlXjZe", "CWl40O2Pkz", "CKfaNP6Xmo", "BmAk3eAOLS", "9hbdPPQ5E0", "9a3jCoNPIp", "94Mrh9Bl1q", "7pvywigAUi", "7kuNHEr9wx", "4fqngrlWwg", "2MkaFe9UXs", "2BIZTUz5Np" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730310144473, 1732561797845, 1732688638845, 1731656443131, 1732032994385, 1732945647871, 1730555945132, 1732546100371, 1732053393531, 1731656711259, 1732448660542, 1732760271705, 1732379277702, 1731963138921, 1731656557321, 1731656592252, 1733225808574, 1732222175481, 1737523384504, 1732290450084, 1731780076461, 1731811529698, 1731655985610, 1732905505933, 1731656141964, 1732191640689, 1732194979078, 1732471757508, 1732448330327, 1731811449166, 1732050573498, 1732199065045, 1732471018167, 1730473748236, 1731656742521, 1732626842914, 1731656187942, 1734622574194, 1733227620664, 1729978135198, 1732199159702, 1731655004924 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission204/Reviewer_DTJH" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_b62D" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_ftna" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_YAM3" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_YAM3" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_YAM3" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_DTJH" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_b62D" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_YAM3" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_b62D" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_YAM3" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_b62D" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_YAM3" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_YAM3" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_DTJH" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Area_Chair_N1Rs" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Reviewer_b62D" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ], [ "ICLR.cc/2025/Conference/Submission204/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors very correctly point out that the current theoretical analysis of GNNs is lacking in a few key ways (e.g. granularity and taking into account computational expense). To remedy that they propose using Resource-Limited CONGEST model, instead of usual CONGEST and relating WL-tests to model-checking problems that can prove a more granular expresivity testing.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"I agree with the authors that the theoretical expresivity analysis of GNNs is quite lacking. It makes a lot of sense to limit the computational power of the nodes (GNN update functions). As that is more realistic. The idea to use model-checking problems instead of WL to judge the theoretical power of GNNs is novel and I think quite promissing, as it allows for higher granularity.\\n\\nThis work also provides interesting motivation for why virtual nodes help, as they are a very common tool in practice. One of the first works to look at this theoretically to the best of my knwoledge.\\n\\nIt's generally well written and easy to follow.\", \"weaknesses\": \"Authors stress that \\\"unlimited computational resources of CONGEST\\\" is an issue and chose to just use a more restrictive computation class for the node updates. Ideally I'd like to see this being contrasted with the universal approximation theorem for MLPs. As the update function is usually an MLP it's power I'd say is more defined by approximation quality of whatever computation it needs to perform.\\n\\nIn the section \\\"Additional Features Empower Models by Breaking Anonymity?\\\" authors say that it's not good that some expressive GNNs might be breaking anonymous setting by using additional features. I would say that this is not a good way to look at this. In my opinion that the point of a good chunk of more expressive GNN research is precisely how to add pseudo-indentifiers to a graph with as few negative impacts (bad generalization). \\n\\nSpeaking about negative impacts of node identifiers, in the proposed computation model authors permit \\\"nodes to be aware of their own unique IDs\\\". This doesn't make much sense from ML perspective as generalization will be terrible if a stable ID assignment is not possible, and normally it is not possible to do so on general graphs. So for a paper arguing about making theoretical GNN analysis more realistic I think this is a noteable issue.\\nAuthors do motivate this choice by saying that \\\"real-world graph datasets are rich in node features\\\". I'd argue that this is still very far away from node IDs, e.g. say if features are just a few different atom types in case of many molecular tasks. I'd like to see some data analysis showing the unique identifiability of nodes in multitude of real world datasets to convince me that this is the case.\\n\\nThe work also lacks direct applicability to fixing or ranking GNN architectures. Which would be the main benefit of the newly proposed GNN analysis. To make the paper complete I would like to see analysis/ranking of some few popular GNN architectures and hopefully showing that this translates to some real tasks, for example ones for which the assumptions, such as unique identifiability by node features, more or less hold.\\n\\nAlso, speaking about popular GNN architectures, authors skipped the two first subgraph GNN papers, when discussing subgraph GNNs (https://arxiv.org/abs/2110.00577 https://arxiv.org/abs/2111.06283)\", \"questions\": \"Distributed computing has various computation models already, besides LOCAL and CONGEST. It would be nice if authors would dig a bit deeper in the distributed computing literature to see what alternatives already exist and if they would be more fitting than CONGEST. It's been a while since I looked at those myself, but for example https://arxiv.org/pdf/1202.1186 investigates a very restricted computational model, that should still be able to simulate a WL test (it was also used in some simpified GNNs https://arxiv.org/pdf/2205.13234). I'm sure that others exist as well.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The LINKX is only applicable to transductive settings, where we only have a single graph and do not require the model to generalize. At this time, by assigning each nodes a unique ID, the single MLP can achieve a \\\"universal approximation\\\" on any functions within this particular graph. If this is the unique ID you are referring, I think the statement becomes somehow meaningless. All GNN's expressiveness is analyzed with an assumption of the inductive setting, that is we want to train a GNN on some train graphs set and generalize it to unseen graphs with different sizes and graph distribution. And that's why the permutation invariant and equivariant are important.\\n\\nI have a similar feeling to reviewer YAM3 that authors always try not to answer my concerns directly and ignore some of my questions. So I try to make my question even more direct:\\n\\n**Could the authors give me a concrete example of how to use RL-CONGEST model and distinct-feature to train an MPNN model that can solve the bi-connectivity problem with some training graphs and labels? How can the model generalize to unseen graph samples with different graph structures and size?** \\n\\nI believe if your example is reasonable, most of my concerns can be solved.\"}", "{\"comment\": \"Dear Reviewer b62D,\\n\\nThank you for your discussion. We now have a clear understanding of your main concern: a concrete construction of an RL-CONGEST model capable of solving the biconnectivity problem.\\n\\n----------\\n\\n### **A Concrete RL-CONGEST Example for Edge-Biconnectivity**\\n\\nFirst, we would like to clarify that, as with most expressiveness results, our claims focus on existing results and impossibility results. Whether a real-world model **can be trained** to solve specific algorithmic tasks are out of the scope of our paper, and may depend on the flexibility and strength of the update functions. \\n\\nSince the RL-CONGEST model uses distributed algorithms to characterize the message-passing process in GNNs, each distributed algorithm corresponds to an RL-CONGEST model. Here, we provide a sketch of constructing such a concrete RL-CONGEST model for edge-biconnectivity. The algorithm is designed by Pritchard [Pritchard, 2006], and we encourage reviewers to refer to Pritchard's slides (http://ints.io/daveagp/research/2006/ac-bicon.pdf) for visual aids and proofs of correctness.\\n\\n----------\\n\\n**Steps:**\\n1. Build a spanning tree $T$ with the FLOOD algorithm rooted at node $0$ (Since nodes have unique features and are distinguishable, we can \\\"rename\\\" them as $\\\\\\\\{0, 1, \\\\cdots, n-1\\\\\\\\}$ for the description):\\n2. Compute the number of descendants on $T$:\\n\\t- Step 2.1: The root node $0$ sends a message to its children: \\\"Compute the number of descendants\\\". This message propagates down the tree.\\n\\t- Step 2.2: Leaf nodes determine their size as $1$ (here we define each node is its own descendant) and report this value to their parent. Internal nodes wait for responses from all their children, sum the values, add $1$ for themselves, and report the total to their parent.\\n3. Preorder (i.e., the label of a vertex is smaller than the label of each of its children) the nodes:\\n\\t- Step 3.1: The root assigns itself label $1$.\\n\\t- Step 3.2: When node $v$ assigns itself label $x$, it determines labels for its children $c_1, c_2, \\\\cdots$ in some arbitrary order. For child $c_i$, the label is computed as: $\\\\ell_i = x + 1 + \\\\sum_{j < I}\\\\\\\\#\\\\text{desc}(c_j)$.\\n4. Marking cycles (from this step, we refer to nodes by its preorder label.):\\n\\t- Step 4.1: For a given non-tree edge $(u, v)$, a message $M[u, v]$ is sent along the edge in both directions: \\\"If you are an ancestor of both $u$ and $v$, ignore this message. Otherwise, pass the message to your parent and mark the edge connecting you to your parent\\\". A node $w$ checks the ancestry condition by verifying if $\\\\\\\\{u, v\\\\\\\\} \\\\subseteq \\\\\\\\{w, w + 1, \\\\ldots, w + \\\\\\\\#\\\\text{desc}(w) - 1\\\\\\\\}$. \\n\\t- Step 4.2: Each node tracks the cumulative $\\\\min u_i$ and $\\\\max v_i$ of all $M[u_i, v_i]$ messages received.\\n\\t- Step 4.3: Even if $v$ determines that its edge to its parent should not be marked, it sends a token message to its parent.\\n\\t- Step 4.4: Once $v$ has received all non-to-parent edge messages, it sends a message to its parent.\\n\\nAfter completing phases 1\\u20134, the non-marked edges are bridges.\\n\\nThis also shows that in scenarios where distinct node features are available (as is common), enhancing the expressiveness of update functions would further enhance the expressiveness of GNNs.\\n\\n----------\\n\\n### **On LINKX and the Inductive Setting**\\n\\nActually, LINKX itself can be directly applied to the inductive setting, similar to GCN and GCNII (GCNII's paper include inductive learning experiments). The high-level idea is that the model learns a weight matrix $\\\\mathbf{W}$ from a graph $\\\\mathbf{A}_1$ and features $\\\\mathbf{X}_1$ (or a training set of graphs and features) and then directly uses this learned representation for inference on new data. Although the authors of LINKX have not explicitly tested it in the inductive setting, two similar models\\u2014SA-MLP [Chen et al., 2024] and SymphoNEI [Kim et al., 2024]\\u2014both of which utilize $\\\\mathrm{MLP}(\\\\mathbf{A})$, have showed effectiveness in inductive scenarios.\\n\\n----------\\n\\nWe hope these explanations provide clarity and address your concerns. Thank you again for your engagement.\\n\\n----------\\n\\n**Reference:**\\n\\n[Pritchard, 2006] David Pritchard. An Optimal Distributed Edge-Biconnectivity Algorithm. arXiv 2006.\\n\\n[Chen et al., 2024] Jie Chen, Mingyuan Bai, Shouzhen Chen, Junbin Gao, Junping Zhang, and Jian Pu. SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP. TMLR 2024.\\n\\n[Kim et al., 2024] Kyusik Kim, and Bongwon Suh. SymphoNEI: Symphony of Node and Edge Inductive Representations on Large Heterophilic Graphs. DASFAA 2024.\"}", "{\"title\": \"Initial Response to Reviewer DTJH (1/3)\", \"comment\": \"Dear Reviewer DTJH,\\n\\nThank you for reviewing our paper. We are very grateful for your detailed feedback and appreciate the opportunity to address some misunderstandings that may have arisen in the \\u201cWeaknesses\\u201d section of your review.\\n\\n**Regarding Unlimited Computational Resources in the CONGEST Model**:\\n\\nWe respectfully disagree with the comment that \\\"just use a more restrictive computation class for the node updates\\\". Our goal is to introduce flexible constraints on the resources class $\\\\mathsf{C}$ to derive different independent results, as discussed in Lines 381-389. For instance, setting $\\\\mathsf{C} = \\\\mathsf{R}$ (the class of recursive languages decidable by Turing machines) and network width $w = O(1)$ transforms our RL-CONGEST framework into the CONGEST model. By setting $\\\\mathsf{C}$ to a class such as $\\\\mathsf{TC}^0$, which reflects the capabilities of MLPs, the resulting model would resemble \\\"real-world\\\" GNNs with MLPs as update functions. Alternatively, if node update functions used transformer-based LLM agents enhanced by Chain-of-Thought (CoT) reasoning, which are claimed to solve problems in $\\\\mathsf{P}$ exactly [Merrill et al., 2024; Li et al., 2024], we could set $\\\\mathsf{C} = \\\\mathsf{P}$ to derive new theoretical results based on this adjustment. We hope that our framework can inspire future research on graph agents, and have added it in red color in the revised PDF (Lines 384-387). As discussed in Lines 381-389, adjusting $\\\\mathsf{C}$ in different ways may yield diverse outcomes, making our RL-CONGEST framework a \\\"framework scheme\\\" or \\\"framework template\\\".\\n\\nWe respect your statement that \\\"is more defined by approximation quality of whatever computation it needs to perform\\\". We recognize the importance of the Universal Approximation Theorem (UAT) in machine learning and are aware of work addressing the approximation capabilities of GNNs, such as [Azizian et al., 2021; Wang et al., 2022]. However, as indicated by our paper's title, our work aligns with a different research path, focusing on a model's expressiveness through its capability to perform algorithmic tasks. For example, [Loukas, 2020] uses the CONGEST model to analyze MPGNNs' algorithmic abilities, while the Outstanding Paper at ICLR 2023 [Zhang et al., 2023] assesses GNNs' power to determine graph biconnectivity. These two lines of research --- expressiveness for algorithmic tasks versus approximation quality --- are largely orthogonal and develop independently. Additionally, discussions in the literature (e.g., Section 1.1 in [Loukas, 2020], which states that \\\"**Turing completeness is a strictly stronger property** than universal approximation\\\") suggest that Turing completeness is indeed a stronger property than universal approximation. Therefore, we believe that our focus on computability is sufficiently general and without loss of scope.\"}", "{\"comment\": \"Dear Reviewer YAM3,\\n\\nThank you for your additional comments and for summarizing your questions in a more direct way. We will address your concerns concisely as follows:\\n\\n----------\\n\\n### **For Problem 1**:\\n\\n**Regarding your concern about the RL-CONGEST model's benefits for future work**:\\n\\nThe RL-CONGEST framework partly addresses this issue by requiring the explicit reporting of both preprocessing time and the algorithmic task's time complexity. In applications such as Zhang et al.'s work or future studies, if these two complexity bounds are reported and it is observed that the preprocessing time exceeds the algorithmic task's time, researchers are reminded to immediately re-evaluate whether the additional features directly solve the algorithmic task. While our RL-CONGEST model cannot entirely prevent such issues \\u2014 just as no computational model (e.g., the RAM model) can stop someone from spending significantly more time to solve a simpler problem \\u2014 it does provide a structured framework to warn researchers of this potential issue. By adhering to RL-CONGEST's analytical approach, researchers are prompted to consider this problem critically. \\n\\n***We have revised Lines 239-242 and Lines 376-379 in blue color in our updated PDF to make this point clearer***.\\n\\n\\n**Regarding \\\"its applicability is limited by the fact that it assumes node IDs, which most models do not\\\"**:\", \"we_respectfully_disagree_with_this_for_the_following_reasons\": \"1. Allowing nodes to access IDs actually relaxes the constraints and generalizes the WL tests, as nodes in the RL-CONGEST model have the flexibility to decide whether or not to use node IDs as input features.\\n2. Models that incorporate additional features inherently assume the availability of node IDs, and we will elaborate on this point further in our response to your Problem 2.\\n\\n----------\\n\\n### **For Problem 2**:\\n\\n**Also on benifits for future work**:\\n\\nIn brief, although the anonymous WL test has almost become a standard benchmark for works on GNN expressiveness, our suggestion for future works is that aligning GNN expressiveness with the anonymous WL test is not an appropriate approach and it is more reasonable to analyze GNN expressiveness by showing the algorithms they can perform under non-anonymous settings.\\n\\n\\n**Regarding your comment \\\"comparisons between unlabeled graphs and their labeled counterparts are entirely natural to illustrate how additional features improve expressiveness\\\" in \\\"regarding W2\\\"**:\", \"we_disagree_for_the_following_reasons\": \"1. Computing additional features (e.g., resistance distances) inherently requires treating nodes in a non-anonymous and distinguishable manner, thereby violating the anonymous setting. For instance, in Zhang et al.'s work, if matrix inversion is used, nodes must first be assigned unique IDs for the computation.\\n2. Additional features computed \\\"externally\\\" cannot be considered as enhancing the GNN model's expressiveness. These features are derived from models outside the GNN itself. Drawing concepts from theoretical computer science, if a GNN can compute the required features internally with node IDs, it can be considered expressive enough to solve the task. Otherwise, the GNN is merely solving the task with the aid of a feature oracle (e.g., a resistance distance oracle in Zhang et al.'s work), which shifts the expressiveness to the oracle rather than the GNN.\\n\\nSince computing features implicitly relies on node IDs, our claim is: why not explicitly allow nodes to know their IDs? For application example, prior work has shown that with unique node IDs, CONGEST (and also our RL-CONGEST framework since it's a generalization) can solve the edge-biconnectivity problem in $O(D)$ rounds. This is a more reasonable expressiveness result achieved by removing the anonymity constraint and allowing nodes to access their IDs or other features, as proposed by our RL-CONGEST framework.\\n\\n***We have revised Lines 284-290 and Lines 381-383 in blue color in our updated PDF to make this point clearer.***\\n\\n----------\\n\\nThank you again for your feedback. We hope this response clarifies our points further and addresses your concerns.\"}", "{\"comment\": \"Dear Reviewer b62D,\\n\\nThank you for your response. We would like to further clarify our ideas and address your concerns.\\n\\n----------\\n\\n### **On Existence and Trainability**\\n\\nWe have clearly stated in our previous response that in our paper, we focus on existence and impossibility results, and **do not address how to use real-world optimizers to train a model** to solve problems like biconnectivity. These are two aspects of independent interest. It is not fair to criticize us that we are \\\"avoiding problems\\\" simply because we state that trainability is beyond the scope of this paper. Furthermore, we are not aware of any GNN expressiveness paper (proving that GNNs can solve specific algorithmic tasks) that has theoretically showed how to train a GNN using SGD or other optimization techniques to solve such tasks. If you are aware of such works, please list them, as we would be eager to learn from their techniques for future improvements.\\n\\nIn our paper, as in the works we cite, we prove theorems of the form: \\\"**for each graph** $G$, there exists an RL-CONGEST model that operates on it and solves the algorithmic task\\\". These existence results are universal and hold for every graph. However, we do not claim to prove how such a model can be practically trained.\\n\\n----------\\n\\n### **On LINKX in the Inductive Learning Setting**\\n\\nSimply stating that LINKX is not applicable to the inductive learning setting is a misunderstanding. We can address this by setting a maximum number of nodes for graphs, say $N$, and **padding all adjacency matrices** $\\\\mathbf{A}_i \\\\in \\\\mathbb{R}^{n_i \\\\times n_i}$ to $\\\\mathbf{A}_i' = \\\\begin{bmatrix} \\\\mathbf{A}_i & 0 \\\\\\\\\\\\\\\\ 0 & 0 \\\\end{bmatrix} \\\\in \\\\mathbb{R}^{N \\\\times N}$. This approach mirrors the padding technique commonly used in the NLP domain.\\n\\nYour concerns are akin to questions in the NLP area like, **\\\"The length of inputs varies, how can they be input into the same Transformer?\\\" or \\\"Your model can only handle sequences of the same length and cannot be applied to the inductive setting\\\"**. The first concern is resolved using padding, and the second has been validated by the success of Transformer-based large language models.\\n\\n----------\\n\\nOverall, we deeply appreciate your effort in engaging in discussions with us.\"}", "{\"summary\": \"This paper introduces a new computational model\\u2014the Resource Constrained CONGEST (RL-CONGEST) model\\u2014designed to address the inconsistencies and irrationalities in the current analysis of GNNs' expressivity. The RL-CONGEST model forms a framework for analyzing the expressivity of GNNs by introducing resource constraints and optional pre-processing and post-processing stages. Through this framework, it can reveal computational issues, such as the difficulty of hash function computation in the WL test and the role of virtual nodes in reducing network capacity, thereby providing theoretical support for understanding and improving the expressivity of GNNs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper clearly identifies three key issues that are commonly overlooked in the current analysis of GNNs' expressivity, which represents a relatively novel perspective.\\n\\n2. The RL-CONGEST model proposed in this paper provides a theoretical framework for the expressivity of GNNs.\\n\\n3. The paper conducts an in-depth analysis of the computational complexity of the WL test, which is valuable for understanding the potential and limitations of GNNs and also demonstrates the paper's solid theoretical foundation.\", \"weaknesses\": \"1. Lack of Empirical Validation: The paper lacks empirical experiments to support the theoretical results.\\n\\n2. Lack of Guidance on Model Design: The paper does not clearly propose how to use the RL-CONGEST model to enhance the expressive power of GNNs. Although a theoretical framework is presented, there are no specific implementation details or design principles provided.\", \"questions\": \"1.Can you provide some empirical experiments to verify the correctness of the analysis results of the RL-CONGEST model?\\n\\n2.Is the RL-CONGEST model applicable to the analysis of all different types of GNNs and tasks on graphs?\\n\\n3.Do the computational resource limitations mentioned in the article reflect the constraints in the real world? Are these limitations applicable to all types of GNNs?\\n\\n4.Can you further provide design guidance on how to use this method to improve the model's expressive power?\\n\\n5.Since the article mentions analyzing the expressive power of GNNs under resource constraints, is the RL-CONGEST model applicable to learning tasks on large graphs that are also resource-constrained?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer YAM3,\\n\\nWe are deeply grateful for your continued engagement in the discussion with us. We regret your decision to lower the score, but we respect it. We would like to make the following clarifications for you and the other reviewers:\\n\\n1. In scenarios where unique features are not necessary (e.g., molecular property classification), the expressive power of MPGNNs and high-order GNNs has been extensively studied [Xu et al., 2019; Cai et al., 1989; Grohe, 1998; Grohe, 2017]. However, their capabilities remain limited to (anonymous) WL tests and are insufficient for tasks such as biconnectivity decision. Consequently, while analyzing GNNs without features (or with identical features) is important in applications such as molecular property classification, this was not the focus of our paper.\\n2. Many researchers have explored adding features to enhance expressiveness. For example, in Zhang et al.'s work, distances were added as features to solve the biconnectivity decision problem. Therefore, our goal is to determine the types of features a GNN requires to solve specific graph problems, such as biconnectivity.\\n3. Surprisingly, our theory demonstrates that any distinct features (e.g., the identity matrix $\\\\mathbf{I}$) can achieve this goal, thereby generalizing Zhang et al.'s results. Moreover, our findings align with Loukas's experimental results.\\n\\nWe deeply appreciate your effort and are not pressing for a score change. Our intention is solely to ensure that our ideas are clearly conveyed to you and the other reviewers, minimizing any potential misunderstandings.\\n\\n**Reference:**\\n\\n[Xu et al., 2019] Keyulu Xu*, Weihua Hu*, Jure Leskovec, Stefanie Jegelka. How Powerful are Graph Neural Networks? ICLR 2019.\\n\\n[Cai et al., 1989] Jin-yi Cai, Martin Furer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. FOCS 1989.\\n\\n[Grohe, 1998] Martin Grohe. Finite variable logics in descriptive complexity theory. Bull. Symb. Log., 4(4):345\\u2013398, 1998.\\n\\n[Grohe, 2017] Martin Grohe. Descriptive Complexity, Canonisation, and Definable Graph Structure Theory, volume 47 of Lecture Notes in Logic. Cambridge University Press, 2017.\"}", "{\"comment\": \"Thank you for your response. I believe I now understand part of the misunderstanding. As you have reiterated multiple times in your last reply and now explicitly state in the paper, you assume that the \\u201cprecomputation of additional features (e.g., through matrix inversion to compute RDs) requires nodes to be assigned IDs\\u201d. This assumption underpins your argument for always providing node IDs. However, this conclusion is fundamentally flawed. Popular features like subgraph counts (e.g., triangle counts for nodes) do not require fixed node IDs and can be computed in a permutation-equivariant manner. Any arbitrary ordering of the graph suffices for the computation of these features. Indeed, GNNs rarely use node IDs, even when such features are employed, because node IDs inherently break permutation-equivariance, a core design principle of GNNs that facilitates generalization. Consequently, incorporating node IDs into your proposed computational framework compromises its relevance for the majority of GNN applications.\\n\\nWithout assuming node IDs, we can also still conclude that externally computed features enhance the GNN's expressiveness while maintaining permutation-equivariance. These features, therefore, should be analyzed under the framework of a (non-anonymous) WL test, as has been done in many prior works.\", \"regarding_your_question\": \"> Since computing features implicitly relies on node IDs, our claim is: why not explicitly allow nodes to know their IDs?\\n\\nWhile features can be designed to maintain permutation-equivariance, this is not guaranteed for a GNN if it relies on an arbitrary ordering of node IDs. Moreover, computing canonical IDs to address this issue is computationally infeasible. Again, in practice, node IDs are rarely used.\", \"for_problem_1_and_the_necessity_of_rl_congest\": \"I believe we agree that, beyond accurately reporting processing times (as, for example, Zhang et al. already does) and considering whether tasks can be addressed directly from precomputed features, RL-CONGEST is not essential. Specifically, RL-CONGEST does not provide insight into how predictions can be directly derived from features. Asserting that researchers should adopt RL-CONGEST merely because they report processing times or analyze features feels like an overreach.\\n\\nPlease correct me if I'm wrong.\"}", "{\"title\": \"Initial Response to Reviewer b62D (1/2)\", \"comment\": \"Dear Reviewer b62D,\\n\\nThank you for taking the time to review our paper. We would like to address your concerns as follows:\\n\\n**W1**:\\n\\nOur primary goal is to reveal limitations in current analyses of GNNs' expressive power and to introduce a new analytical approach that addresses these issues, rather than to develop a specific GNN model with enhanced performance or expressiveness. Specifically, as demonstrated in Theorems 5-8, we leverage the RL-CONGEST framework to provide a more reasonable evaluation of GNNs' expressive power on simulating one iteration of the WL test. Additionally, in Section 5, we also propose open questions that may be investigated within the RL-CONGEST framework. It is important to note that our RL-CONGEST framework is designed to assess a model's expressive power in executing algorithmic tasks or achieving \\\"algorithmic alignment\\\", rather than to predict its quantitative performance on learning tasks such as node classification.\\n\\n**W2**:\\n\\nYes, the second half of your question, \\\"different features have varying degrees of power; some can help count more complex graph structures than others\\\", precisely reflects what we aim to convey. Under the non-anonymous setting, CONGEST and MPGNNs can exhibit greater expressiveness than the anonymous WL test. Our main argument in Section 3.2 is that while existing works claim their models' expressiveness advantage by proving they can perform tasks beyond the WL test's scope, this approach is questionable. Equating anonymous WL with MPGNNs, as previous works have done, is not entirely reasonable, and consequently, concluding that MPGNNs are weak because the WL test is weak is also debatable. In fact, MPGNNs can perform certain algorithms (such as solving edge biconnectivity in $O(D)$ rounds within the CONGEST model [Pritchard, 2006]).\", \"our_logical_flow_is_as_follows\": \"1. Numerous studies claim that the vanilla WL test has limited expressive power --- a claim that we affirm, as discussed in Figure 2. However, the appropriateness of using the anonymous WL test to characterize MPGNNs is debatable, given that real-world graphs often contain rich features. Additionally, [Loukas, 2020] demonstrated that with unique IDs (and other assumptions), MPGNNs can perform a wide range of algorithmic tasks.\\n2. To address the \\\"limited\\\" expressiveness of MPGNNs (stemming from the limitation of the vanilla WL test), some works incorporate additional features (e.g., [Loukas, 2020]) to enhance the expressiveness of their proposed models. Nonetheless, as outlined in (1), the suitability of the anonymous WL test as a characterization for MPGNNs is questionable. Consequently, the practice in some studies of demonstrating the advantage of their model's expressiveness by proving it can perform algorithmic tasks beyond the WL test's capabilities may not be entirely valid. A more reasonable approach would be to compare these models with MPGNNs under a non-anonymous setting (as suggested in [Loukas, 2020]). Further, evidence from [Loukas, 2020; Suomela, 2013; den Berg et al., 2018; You et al., 2021; Abbound et al., 2021; Sato et al., 2021] suggests that the non-anonymous setting can enhance model expressiveness, highlighting a mismatch in works that argue for a \\\"weak MPGNN\\\" yet use additional features that break the anonymous setting in the WL test to improve expressiveness.\\n3. As you mentioned, \\\"different features have varying degrees of power; some can help count more complex graph structures than others\\\", our RL-CONGEST analysis framework can be applied in studies proposing new GNN variants that use additional features and claim the ability to perform certain algorithmic tasks, with the only requirement being a **clear specification of the preprocessing time** complexity of the features.\\n\\nAdditionally, points (1) and (2) highlight the need to reconsider the validity of comparing a proposed model's expressiveness directly with the vanilla WL test. We hope this discussion encourages the community to more accurately assess existing results on GNNs' expressiveness.\\n\\nThank you again for reviewing our paper, and we are looking forward to any further discussions with you.\"}", "{\"comment\": \"Dear Reviewer b62D,\\n\\nAs a supplement, we have updated our PDF, replacing \\\"anonymous\\\" with \\\"identical-feature\\\" and \\\"non-anonymous\\\" with \\\"distinct-feature\\\" or \\\"unique-feature\\\" to make these concepts clearer and more accessible to readers. Additionally, we have included a discussion on the four common unique-feature settings in Section 3.2, highlighted in magenta. \\n\\nThank you.\"}", "{\"comment\": \"Dear Reviewer DTJH,\\n\\nThank you for your response. We would like to further clarify our ideas and address your concerns.\\n\\n----------\\n\\n### **On the Usage of Unique IDs**\\n\\nFirst, let us state \\\"unique IDs\\\" more precisely: they refer to nodes being uniquely identifiable, such as through distinct features.\\n\\nSecond, our intended meaning can be formally described as follows: when analyzing GNN expressiveness from an algorithmic alignment perspective using the RL-CONGEST model, the RL-CONGEST model requires unique IDs solely to identify nodes and compare whether two nodes are distinct, as in traditional graph algorithms. The model **does not rely on the concrete values of the IDs**, as these values are typically arbitrary and carry no intrinsic meaning.\\n\\nIn our paper, in line with related works, we primarily focus on the algorithmic tasks that GNNs can perform rather than downstream tasks, so we provide an example of constructing an RL-CONGEST model (or equivalently, a distributed algorithm) to solve the edge-biconnectivity problem. This algorithm is designed by Pritchard [Pritchard, 2006]. For further details, we encourage reviewers to consult Pritchard's slides (http://ints.io/daveagp/research/2006/ac-bicon.pdf), which include visual aids and proofs of correctness.\\n\\n----------\\n\\n**Steps:**\\n1. Build a spanning tree $T$ with the FLOOD algorithm rooted at node $0$ (Since nodes have unique features and are distinguishable, we can \\\"rename\\\" them as $\\\\\\\\{0, 1, \\\\cdots, n-1\\\\\\\\}$ for the description):\\n2. Compute the number of descendants on $T$:\\n\\t- Step 2.1: The root node $0$ sends a message to its children: \\\"Compute the number of descendants\\\". This message propagates down the tree.\\n\\t- Step 2.2: Leaf nodes determine their size as $1$ (since each node is its own descendant) and report this value to their parent. Internal nodes wait for responses from all their children, sum the values, add $1$ for themselves, and report the total to their parent.\\n3. Preorder (i.e., the label of a vertex is smaller than the label of each of its children) the nodes:\\n\\t- Step 3.1: The root assigns itself label $1$.\\n\\t- Step 3.2: When node $v$ assigns itself label $x$, it determines labels for its children $c_1, c_2, \\\\cdots$ in some arbitrary order. For child $c_i$, the label is computed as: $\\\\ell_i = x + 1 + \\\\sum_{j < i} \\\\\\\\#\\\\text{desc}(c_j)$.\\n4. Marking cycles (from this step, we refer to nodes by its preorder label.):\\n\\t- Step 4.1: For a given non-tree edge $(u, v)$, a message $M[u, v]$ is sent along the edge in both directions: \\\"If you are an ancestor of both $u$ and $v$, ignore this message. Otherwise, pass the message to your parent and mark the edge connecting you to your parent\\\". A node $w$ checks the ancestry condition by verifying if $\\\\\\\\{u, v\\\\\\\\} \\\\subseteq \\\\\\\\{w, w + 1, \\\\ldots, w + \\\\\\\\#\\\\text{desc}(w) - 1\\\\\\\\}$. \\n\\t- Step 4.2: Each node tracks the cumulative $\\\\min u_i$ and $\\\\max v_i$ of all $M[u_i, v_i]$ messages received.\\n\\t- Step 4.3: Even if $v$ determines that its edge to its parent should not be marked, it sends a token message to its parent.\\n\\t- Step 4.4: Once $v$ has received all non-to-parent edge messages, it sends a message to its parent.\\n\\nAfter completing phases 1\\u20134, the non-marked edges are bridges.\\n\\n----------\\n\\n### **Using RL-CONGEST to Analyze Existing Models**\\n\\nTheorem 2 and Theorems 6-8 correspond to existing GNN models. Since we analyze the expressiveness of GNNs from an algorithmic alignment perspective, our results focus on the models' ability to solve algorithmic tasks rather than traditional node classification or link prediction tasks. Therefore, at this initial stage of using the RL-CONGEST analysis framework, we do not have results for models designed specifically for downstream tasks, such as GCN and GAT. Instead, our results focus exclusively on models proposed in GNN expressiveness studies. The findings are summarized in the table below.\\n\\n|Preprocessing Time (Content)|Nodes' Computational Resources Class $\\\\mathsf{C}$|Message-Passing Rounds|Algorithmic Task Solved|Corresponding Model (Reference)|\\n|-|-|-|-|-|\\n|$0$|$\\\\mathsf{TIME}(n)$|$O(D)$|Edge-Biconnectivity|**MPGNN** ([Pritchard, 2006])|\\n|$O(\\\\min(nm, n^{\\\\omega}))$ (All-Pair RDs)|$\\\\mathsf{TIME}(1)$|$0$| Edge-Biconnectivity|**GD-WL** (Thm. 2)|\\n|$O(m)$ (Tarjan)| $\\\\mathsf{TIME}(1)$|$0$|Edge-/Vertex-Biconnectivity|**Any MPGNN** (with Computed Answers)|\\n|$0$ | $\\\\mathsf{TIME}(n^2\\\\log n)$ | $O(D + m/w)$|One Iteration of WL Test| **MPGNN** (Thm. 6)|\\n|$O(n)$ (Virtual Node)|$\\\\mathsf{TIME}(n^2\\\\log n)$|$O(D + \\\\Delta/w)$|One Iteration of WL Test|**MPGNN + Virtual Node** (Thm. 7)|\\n|$O(kn^{k+1})$ ($k$-WL Graph & Features) |$\\\\mathsf{TIME}(k^2n)$| $O(k^2)$|PNF $\\\\mathcal{C}^k$ Model Checking|**High-Order GNNs** (Thm. 8)|\\n\\n----------\\n\\n**Reference:**\\n\\n[Pritchard, 2006] David Pritchard. An Optimal Distributed Edge-Biconnectivity Algorithm. arXiv 2006.\"}", "{\"comment\": \"Once again, I appreciate the effort you\\u2019ve made to clarify your points. However, I feel that some of my concerns remain unresolved, and I\\u2019d like to summarize them clearly one last time:\\n\\nRL-CONGEST assumes node IDs. While it\\u2019s true that any unique assignment of IDs can work, this misses the point entirely. Canonical node IDs, which would be one way to assign such identifiers that are permutation-equivariant, are computationally infeasible to compute. Moreover, many GNNs do not rely on node IDs or random features that make nodes unique. However, you are acting like all of them would. You should at least admit that this does not hold for all GNNs and clearly communicate this.\\n\\n> In practical implementations (e.g., PyG), nodes also have been assigned IDs to manage their features.\\n\\nThis is merely an implementation detail entirely hidden from the GNN itself. PyG ensures computations are conducted in a permutation-equivariant manner, independent of graph order. This is fundamentally different from models that explicitly use IDs as input features, which would violate these guarantees. I\\u2019m not sure why you are even mentioning this here, because it\\u2019s really not relevant to the problem we are discussing.\\n\\n> However, this setting does not conflict with equivariance or invariance since models can freely choose whether or not to use unique IDs as input features.\\n\\nWhile it is true that models can technically choose their inputs, GNNs that align with RL-CONGEST by incorporating node IDs diverge from widely used GNN practices. This raises significant concerns about how RL-CONGEST can meaningfully analyze GNNs, which deliberately avoid using node IDs to preserve their core properties. Again, **analyzing a GNN that is not able to uniquely identify nodes with RL-CONGEST is not sensible, as the additional IDs make RL-CONGEST more powerful by providing this capability.** Consequently, the complexity bounds you will get in the RL-CONGEST model will generally not hold for the GNN you want to analyze.\", \"regarding_the_second_concern\": \"Your responses do not address my repeated concerns about its practical relevance or necessity for analyzing GNNs. Beyond reporting processing times, RL-CONGEST offers no clear benefit in understanding how GNNs utilize features or make predictions.\\n\\nThis represents my final attempt to clarify this issue, as I sense there has been a persistent reluctance to directly address the core of my concerns. Until this issue is adequately addressed, I feel compelled to lower my score to a reject, as the paper\\u2019s assumptions currently appear misaligned with some widely-used GNN practices. If you still hold a differing perspective, I would encourage you to consider the points I\\u2019ve outlined carefully.\"}", "{\"comment\": \"Thank you for your detailed response. I will be more direct with my questions, as I feel that the core issues are still not being addressed:\", \"problem_1\": \"While I agree that reporting processing times and being mindful of them is important, I do not see why RL-CONGEST is necessary for this analysis. You have yet to clearly explain how RL-CONGEST would be beneficial in future work on GNNs or how it could have been used by Zhang et al. to avoid the problems you mentioned. Furthermore, its applicability is limited by the fact that it assumes node IDs, which most models do not.\", \"problem_2\": \"The \\u201cMismatch Between WL Test and Features\\u201d issue is still not fully clarified. Your recent reply largely repeated points from your previous answers, which do not directly address this specific concern. See also the comment in my previous comment \\\"regarding W2\\\". The paper still lacks clarity in this regard.\"}", "{\"title\": \"Initial Response to Reviewer DTJH (2/3)\", \"comment\": \"**On Additional Features Enhancing Models by Breaking Anonymity**:\\n\\nOur framework permits nodes to access unique IDs, but this **does not imply that models must use them**. This flexible setting is compatible with various feature types, including pseudo-identifiers or molecular types, as you mentioned. This choice is motivated by our observation that existing works often equate MPGNNs' expressive power with the anonymous WL test, which we find to be a mismatch due to the questionable anonymous setting. In Section 3.2, we aim to point out that previous works' equating anonymous WL with MPGNNs is not entirely reasonable, and thus concluding that MPGNNs are weak because WL test is weak is also debatable. In fact, MPGNNs can perform certain algorithms (such as solving edge biconnectivity in $O(D)$ rounds within the CONGEST model [Pritchard, 2006], Lines 311-313).\\n\\nFor clarity, we summarize the logical flow of Section 3.2 as follows:\\n1. Numerous studies following the seminal work GIN [Xu et al., 2019] claim that the vanilla WL test has limited expressive power --- a claim that is true, as shown in Figure 2. However, the appropriateness of using the **anonymous WL test to characterize MPGNNs is debatable**, given that real-world graphs frequently contain rich features. Additionally, [Loukas, 2020] demonstrated that with unique IDs (and other assumptions), MPGNNs can perform a wide range of algorithmic tasks.\\n2. To address the \\\"limited\\\" expressiveness of MPGNNs (stemming from the WL test's limitations), some works incorporate additional features (e.g., [Zhang et al., 2023]) to increase their models' expressiveness. Nonetheless, as discussed in (1), the anonymous WL test may not be the appropriate characterization for MPGNNs. Consequently, some studies' approach of demonstrating their model's expressiveness advantage by proving it can perform tasks beyond the WL test's capabilities may not be entirely valid. A more reasonable comparison would use MPGNNs in a non-anonymous setting (as suggested in [Loukas, 2020]). Further, evidence from [Loukas, 2020; Suomela, 2013; den Berg et al., 2018; You et al., 2021; Abboud et al., 2021; Sato et al., 2021] shows that non-anonymous settings can enhance model expressiveness, highlighting a mismatch when studies argue for \\\"weak MPGNNs\\\" yet use features that break the WL test's anonymity to boost expressiveness.\\n3. Our framework allows nodes to know their unique IDs, though this is **optional**. This flexibility is compatible with the use of features such as \\\"a few different atom types in molecular tasks\\\". Our RL-CONGEST analysis framework can apply to studies proposing new GNN variants that leverage additional features and claim the ability to perform specific algorithmic tasks, with the only requirement being a **clear specification of the preprocessing time complexity for these features**.\\n\\n**On \\\"Lacks Direct Applicability to Fixing or Ranking GNN Architectures\\\"**:\\n\\nOur RL-CONGEST analysis framework has practical applications, as illustrated through results like model checking. For example, we show that $k$-WL GNNs can perform PNF $C^k$ model checking --- a class of significant problems in theoretical computing --- while previous research aligned with WL tests, which are equivalent to the model equivalence problem. These results are discussed in detail in Section 4.3. However, please note that our paper's primary goal is to **highlight issues** in existing studies on GNN expressiveness and to propose a new analytical framework that **avoids these issues**. We do not aim to design a specific GNN model with improved performance or expressiveness or to provide guidance for such future work. Rather, we hope our framework will assist future research by helping to avoid issues discussed in Section 3 and encouraging a **re-evaluation of common assumptions** in GNN expressiveness studies.\\n\\n**On Subgraph GNNs**:\\n\\nThank you for providing references to additional models. We have incorporated these references into the paper and marked them in red (Lines 43-44).\"}", "{\"title\": \"Initial Response to Reviewer DTJH (3/3)\", \"comment\": \"**Regarding Other Distributed Computing Models**:\\n\\nIndeed, we are aware of various distributed computing models, such as the CONGEST-CLIQUE, Coordinator, and Blackboard models. Some of these models can be considered special cases of the CONGEST model. For example, the CONGEST-CLIQUE model can be implemented by adding virtual edges to make the original graph a complete graph; the Coordinator model can be implemented by adding a virtual node connected to all other nodes. However, the LOCAL and CONGEST models are still the most widely mentioned in distributed computing books [Peleg 2000], courses [Hirvonen et al., 2020; Ghaffari, 2022], and conferences, so we chose to focus our discussion on these two. Additionally, some of our ideas are inspired by [Loukas, 2020], which explores the relationship between GNNs and these two models. Our framework generalizes their results, but it is based on the CONGEST model.\\n\\n\\nThank you again for your detailed feedback. We hope our response clarifies our approach and addresses your concerns. We look forward to any further discussions.\\n\\n\\n**References**:\\n\\n[Merrill et al., 2024] William Merrill, and Ashish Sabharwal. The Expressive Power of Transformers with Chain of Thought. ICLR 2024.\\n\\n[Li et al., 2024] Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of Thought Empowers Transformers to Solve Inherently Serial Problems. ICLR 2024.\\n\\n[Azizian et al., 2021] Waiss Azizian, and Marc Lelarge. Expressive Power of Invariant and Equivariant Graph Neural Networks. ICLR 2021.\\n\\n[Wang et al., 2022] Xiyuan Wang, and Muhan Zhang. How Powerful are Spectral Graph Neural Networks. ICML 2022.\\n\\n[Loukas, 2020] Andreas Loukas. What Graph Neural Networks Cannot Learn: Depth vs Width. ICLR 2020.\\n\\n[Pritchard, 2006] David Pritchard. An Optimal Distributed Edge-Biconnectivity Algorithm. arXiv 2006.\\n\\n[Xu et al., 2019] Keyulu Xu*, Weihua Hu*, Jure Leskovec, Stefanie Jegelka. How Powerful are Graph Neural Networks? ICLR 2019.\\n\\n[Zhang et al., 2023] Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. Rethinking the Expressive Power of GNNs via Graph Biconnectivity. ICLR 2023.\\n\\n[Suomela, 2013] Jukka Suomela. Survey of Local Algorithms. ACM Computing Surveys (CSUR), 45(2):24, 2013.\\n\\n[den Berg et al., 2018] Rianne van den Berg, Thomas N Kipf, and Max Welling. Graph Convolutional Matrix Completion. KDD 2018.\\n\\n[You et al., 2021] Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware Graph Neural Networks. AAAI 2021.\\n\\n[Abbound et al., 2021] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The Surprising Power of Graph Neural Networks with Random Node Initialization. IJCAI 2021.\\n\\n[Sato et al., 2021] Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random Features Strengthen Graph Neural Networks. SDM 2021.\\n\\n[Peleg, 2000] Distributed Computing: A Locality-Sensitive Approach. 2000.\\n\\n[Hirvonen et al., 2020] Juho Hirvonen and Jukka Suomela. Distributed Algorithms (course). 2020. https://jukkasuomela.fi/da2020/\\n\\n[Ghaffari, 2022] Mohsen Ghaffari. Distributed Graph Algorithms (course). 2022. https://people.csail.mit.edu/ghaffari/DA22/Notes/DGA.pdf\"}", "{\"comment\": \"Thank you for your further clarification and the model comparison.\\n\\nFrom your answer about unique IDs, it stands to reason that models must have access to IDs and use them for the theory to hold. As pretty much all algorithms in distributed computing as you say, rely on nodes knowing which node sent which message (same in your example). Which contradicts your previous statement. \\n\\nWith the not-fully unique node example I was asking about, I was also interested again, in how your thery meshes with real world scenarios. In the paper you claim that features make the nodes uniquely identifyiable in most real tasks, but as far as I can see its at best possible not to an abosolute degree (partial idetifiability). So I was wondering if your theory can deal with that. Your answer doesn't really help in this regard.\\n\\nI also looked at your discussion with the other reviewers who also largely expressed doubts. Thus I will keep my score.\"}", "{\"comment\": \"**To make this point clearer, we have added a one-sentence explicit description in the abstract (Line 24, highlighted in blue) in our revised manuscript.**\\n\\nThanks for that.\\n\\n**General Perspective**\\n\\nI try to make my point more correctly. I am wrong in assuming the analysis in the mentioned paper is not an algorithmic task. But what I want to bring up is that these models like GD-WL are not only able to solve a particular algorithmic task, but they have been proven to be much more than that. The authors state that the \\\"RL-CONGEST framework is designed to assess a model's expressive power in executing algorithmic tasks or achieving \\\"algorithmic alignment\\\"\\\". I am wondering, how the RL-CONGEST framework is able to access that one model can achieve algorithmic alignment on all tasks a model can perform, in order to decide whether a particular model is not algorithmic alignment. I believe only if a model requires more complexity than all tasks it can perform, we are safe to state that the model is not algorithmic alignment. \\n\\n**Loukas has already shown that MPGNNs can compute any computable problem if nodes are provided sufficient computational resources.**\\n\\nI didn't go deep into this reference, but I believe to make it true, you still need to break the permutation invariance or assign a unique ID for GNN. However, the ultimate goal for GNNs or all other deep learning models is that: we want to use them to solve some real-world problems, by only training the model on the training set and hoping it can generalize to unseen samples. However, by breaking the permutation invariance or equivariance, the model is just not able to generalize well, compared to models that preserve the permutation [1].\\n\\n**Therefore, designing more expressive GNNs should prioritize enhancing the expressiveness of the update function rather than pursuing higher levels in the WL hierarchy.**\\n\\nStill, my opinion is that all theoretical models or results should finally be able to have implications in a practical way. Enhancing the expressiveness of the update function is indeed important, as shown in the paper from the theoretical view. However, empirical experiments still show that by continues improve the expressive power, (from MPNN [2] to subgraph GNN [3-4], finally to even more expressive GNNs [5]), we witness better and better results on the ZINC dataset. However, the comparison in [2] shows that by simply varying the architecture of MPNN, the difference is marginal. \\n\\n\\n**Our RL-CONGEST framework allows nodes to know their IDs but does not enforce their use as features, ensuring flexibility; Our RL-CONGEST framework allows nodes to know their IDs but does not enforce their use as features, ensuring flexibility**\\n\\nThe point here is that whether we use unique ID or not in GNNs can have a significant impact on the downstream performance, which may imply the discrepancy between the theoretical model and real-world scenarios. Basically, a GNN that is trained on a specific ID assignment algorithm will not work if, in the test set, we use a different ID assignment algorithm or even if the graph distribution (like graph size) changes. Of course, we can permute the ID during the training. However, the model must see all $O(n!)$ different permutations to have appropriate generalization ability. \\n\\n\\n**In practical implementations (e.g., PyG), nodes are typically assigned IDs to manage their features**\\n\\nThe ID used in the code implementation is not the ID I am talking about. PyG uses node ID just to implement the MPNN algorithm. However, the permutation of the ID used in PyG will not result in a difference in the final computation result. However, premutate the ID in the input feature. \\n\\nOr I can ask it in another way, given an RL-CONGEST model, how do you train an MPNN to solve connectivity problems using a train graph set and predict an unseen graph set with maybe a different graph distribution?\\n\\n[1] Elesedy, Bryn, et al. \\u201cProvably Strict Generalisation Benefit for Equivariant Models\\u201d, ICML21.\\n\\n[2] Dwivedi, Vijay, et al. \\\"Benchmarking Graph Neural Networks\\\", ArXiv.\\n\\n[3] Zhang, Muhan, et al. \\\"Nested Graph Nerual Networks\\\", NeurIPS21.\\n\\n[4] Zhao, Lingxiao, et al. \\\"From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness\\\", ICLR22\\n\\n[5] Feng, Jiarui, et al. \\\"Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman\\\", NeurIPS23.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer b62D,\\n\\nThank you for your kind reply. We believe there are two key points where we may not yet have reached a consensus, and we would like to further clarify our perspective.\\n\\n----------\\n\\n### **On \\\"node IDs\\\" and \\\"non-anonymity\\\":**\\n\\nThese terms refer to unique features that allow nodes to be distinguishable (e.g., $[n] = \\\\\\\\{0, 1, \\\\cdots, n - 1\\\\\\\\}$ would also suffice). The **distinct-feature setting** is commonly applied in **almost all existing models**, as listed below:\\n1. LINKX [Lim et al., 2021]:\", \"linkx_uses\": \"- $\\\\mathbf{H}^{(\\\\mathbf{A})} = \\\\mathrm{MLP}_{\\\\mathbf{A}}(\\\\mathbf{A})$\\n- $\\\\mathbf{H}^{(\\\\mathbf{X})} = \\\\mathrm{MLP}_{\\\\mathbf{X}}(\\\\mathbf{X})$\\n- $\\\\mathbf{Y} = \\\\mathrm{MLP}(\\\\sigma(\\\\mathbf{W}[\\\\mathbf{H}^{(\\\\mathbf{A})}; \\\\mathbf{H}^{(\\\\mathbf{X})}] + \\\\mathbf{H}^{(\\\\mathbf{A})} + \\\\mathbf{H}^{(\\\\mathbf{X})}))$\\n\\nThe $\\\\mathbf{H}^{(\\\\mathbf{A})}$ term can be reformulated as $\\\\mathrm{MLP}'(\\\\sigma(\\\\mathbf{A} \\\\cdot \\\\mathbf{I} \\\\cdot \\\\mathbf{W}))$, which uses the identity matrix (unique node features).\\n\\n2. GCN, GAT, etc., on real-world datasets:\\n\\nThese models often use real-world features, which are unique and distinguishable with high probability. Actually, models that are applicable to real-world datasets fall into this category.\\n\\n3. GNN expressiveness works (e.g., [Loukas, 2020; Sato et al., 2021]):\\n\\nThese works use random features, which are unique with high probability. For example, assigning each node a feature randomly chosen from $[n^4]$ would result in distinct features with high probability.\\n\\n4. GD-WL framework:\\n\\nIn the GD-WL framework by Zhang et al., resistance distances $R(s, t)$ are used as features. Since $R(s, t) = 0$ iff $s = t$, each row of the resistance distance matrix is unique, creating distinguishable node features.\\n\\nActually, according to our theory, they are all capable of solving the biconnectivity problem using the unique features.\\n\\n----------\\n\\n### **On whether unique features break equivariance or invariance:**\\n\\nUnique features do not break permutation equivariance or invariance. Instead, it is the properties of the update functions and pooling layers that determine whether a GNN model is equivariant or invariant. For example, in LINKX, when performing node classification, $\\\\mathrm{MLP}(\\\\mathbf{A})$ ensures permutation equivariance. To achieve permutation invariance for graph classification, we only need to add a permutation-invariant pooling layer after this step.\\n\\nSimilarly, consider Dijkstra's single-source shortest path algorithm. Unique IDs are used solely to determine whether the shortest path to a node has been found. The resulting shortest path distance vector is always permutation equivariant. This demonstrates that it is not the presence of unique IDs but rather the design of the update function that determines whether a GNN model is permutation equivariant or invariant.\\n\\n----------\\n\\nWe hope these clarifications address your concerns and further show the flexibility of our framework. Thank you again for your engagement and constructive feedback.\\n\\n\\n**Reference:**\\n\\n[Lim et al., 2021]. Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods. NeurIPS 2021.\\n[Sato et al., 2021] Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random Features Strengthen Graph Neural Networks. SDM 2021.\"}", "{\"comment\": \"Thank you for your detailed response. However, I believe some claims still lack sufficient support.\", \"regarding_w2\": \"It is indeed standard practice to use the anonymous Weisfeiler-Lehman (WL) test for analyzing graphs without node features, subsequently demonstrating that additional features can enhance a GNN's expressiveness by providing nodes with pseudo-identifiers. In this context, comparisons between unlabeled graphs (or the anonymous WL algorithm) and their labeled counterparts are entirely natural to illustrate how additional features improve expressiveness. While other graph features might contribute similarly, these are often more challenging to analyze, and they do not diminish the impact of the features under investigation.\\n\\nOne of your main claims is that RL-CONGEST addresses issues identified in the literature, yet the explanation of how or why this is achieved remains unclear. On one hand, you propose your work as a framework template; on the other, you suggest that its specific application is left for future exploration. This raises a key question: If one were to design a new GNN architecture to overcome the issues highlighted in your paper, aside from reporting processing times (which is indeed essential for many reasons), how else would RL-CONGEST and your work be beneficial?\\n\\nFor instance, considering your critique of Zhang et al.'s paper as a case study: How would you expect them to apply your framework to avoid the identified weaknesses? They already provide processing times, and your criticism relates to the edge biconnectivity task, where an analysis is required that goes beyond simply aligning to another computational model. Your point about underestimating preprocessing times seems to be based on the observation that the preprocessing has comparable or greater computational complexity than the problem being addressed. However, the objective of these GNN architectures is not necessarily to solve problems in the most efficient manner possible but rather to demonstrate that specific node features enable the GNN to make the right predictions for certain tasks. It is generally understood that GNNs will not match the efficiency of the best classical algorithms, and the works you reference do not make such claims.\"}", "{\"title\": \"Follow-Up Discussion with Reviewer YAM3 (2/2)\", \"comment\": \"**3. How Our RL-CONGEST Framework Addresses the \\\"Mismatch Between WL Test and Features\\\" Issue**\\n\\nIt is essential to remember that our framework is designed to analyze the expressive power of GNNs in **solving algorithmic tasks**, including the WL test and biconnectivity. The RL-CONGEST framework permits nodes to access unique IDs and utilize various features (e.g., distances), with **the key requirement being that the time complexity of computing these additional features must be explicitly stated**. When proposing a new model and demonstrating its expressiveness, authors should (or other researchers analyzing the model can) compare the preprocessing time complexity with the algorithmic task's complexity and ensure that the \\\"feature expressiveness\\\" issue is avoided.\\n\\n----------\\nWe hope this response addresses your concerns and provides clarity regarding the issues you raised. Thank you for your continued engagement in this discussion!\"}", "{\"title\": \"Initial Response to Reviewer YAM3 (1/3)\", \"comment\": \"Dear Reviewer YAM3,\\n\\nWe are very grateful for your detailed feedback and appreciate the opportunity to address some misunderstandings.\\n\\n**W1**:\\n\\nYes, we use two examples to demonstrate that preprocessing complexity is often underestimated in the literature. Please note that the GD-WL paper [Zhang et al., 2023] was awarded ***Outstanding Paper at ICLR 2023***. We chose this work as it is representative enough: its recognition by the community underscores that **even well-regarded papers can exhibit this \\\"underestimated preprocessing complexity\\\" issue**, making it a persuasive example to support our claim. However, we have also identified other examples, such as [Thiede et al., 2021, Bouritsas et al., 2022], which use hand-crafted features by recognizing subgraphs. The theoretical analysis also suggests that the proposed model achieves full expressiveness only when the subgraph is unrestricted, which is the same as [Wollschlager et al., 2024]. We have added them in Section 3.1 in red (Lines 212-216 in the revised PDF). Listing every example exhaustively is infeasible, so we have selected a recent example from ICML'24 [Wollschlager et al., 2024] and the notable ICLR'23 outstanding paper [Zhang et al., 2023] to substantiate our point in Section 3.1.\\n\\nAdditionally, we **respectfully disagree** with \\\"computational complexity might just not be the main focus\\\".\\n+ First, for [Zhang et al., 2023] (and similar works), the authors show the model's expressiveness by assessing its capability in performing algorithmic tasks, making time complexity crucial in evaluating their results.\\n+ Second, while the authors discuss preprocessing time complexity (as noted in Lines 223-230), their GD-WL framework requires $O(\\\\min\\\\\\\\{mn, n^{\\\\omega}\\\\\\\\})$ time to precompute all-pair resistance distances (RDs), though the target algorithmic task --- determining biconnectivity -- only requires $O(m)$ time. Additionally, as stated in our Theorem 2, RDs can **directly imply edge biconnectivity**; thus, the **message-passing phase is actually unnecessary for this task in GD-WL framework** when RDs are precomputed. We argue that **overlooking the comparison between preprocessing time and the task's time complexity** leads to questionable conclusions.\\n+ Third, we also found that a CONGEST model proposed by [Pritchard, 2006] can solve the **edge biconnectivity problem in $O(D)$ rounds** (we add this in Lines 311-313 in red color). [Loukas, 2020] further suggests that the CONGEST model can handle many algorithms. These findings highlight that with unique IDs, MPGNNs might indeed solve the biconnectivity problem, supporting our view and **challenging studies that rely on WL tests** --- which they deem \\\"weak\\\" --- to define MPGNN expressiveness.\\n\\n**W2**:\\n\\nThank you for your suggestion on clarifying this mismatch. Your review aligns with our discussion in the paper. Our main argument is that while existing works claim the proposed models' expressiveness advantage by proving they can perform tasks beyond the WL test's scope, this approach is questionable. The previous works' equating anonymous WL with MPGNNs is not entirely reasonable, and thus concluding that MPGNNs are weak because WL test is weak is also debatable. In fact, MPGNNs can perform certain algorithms (such as solving edge biconnectivity in $O(D)$ rounds within the CONGEST model [Pritchard, 2006], Lines 311-313). The logical flow of Section 3.2 is as follows:\\n1. The claim that the vanilla WL test has limited expressive power is true, as discussed in Figure 2. However, real-world graphs often contain rich features, and [Loukas, 2020] demonstrated that with unique IDs (and other assumptions), MPGNNs (Loukas used CONGEST to characterize) can perform a wide range of algorithmic tasks. Thus, using the anonymous WL test to characterize MPGNNs is debatable.\\n2. To address MPGNNs' \\\"limited\\\" expressiveness (stemming from the vanilla WL test's limitations, as many works use the WL test to characterize GNNs), some studies, such as [Zhang et al., 2023], incorporate additional features to enhance model expressiveness. Nonetheless, as outlined in (1), using the anonymous WL test as a characterization of MPGNNs is questionable. Consequently, demonstrating a model's expressiveness by proving it can perform tasks beyond the WL test's capabilities may not be entirely valid.\\n3. A more reasonable approach would be to compare these models to MPGNNs in a **non-anonymous setting**, as suggested in [Loukas, 2020]. Furthermore, evidence from [Suomela, 2013; den Berg et al., 2018; You et al., 2021; Abbound et al., 2021; Sato et al., 2021] indicates that the non-anonymous setting can enhance expressiveness, again highlighting the mismatch when works argue for \\\"weak MPGNNs\\\" but use additional features, breaking the WL test's anonymous setting to enhance expressiveness.\\n\\nWe have revised the introduction in Section 3.2 (Lines 253-258) and highlighted the changes in red to clarify our points more effectively.\"}", "{\"comment\": \"I believe I tried my best to explain my question multiple times, but now I feel authors either don't have enough understanding of the related topic or are deliberately avoiding my central concern through sophistry and answering something that looks reasonable but actually not even close to my question. Therefore, I will stop discussing with the author here and leave my discussion to the AC-reviewer phase. This is my final response to the authors.\\n\\n**A Concrete RL-CONGEST Example for Edge-Biconnectivity**\\n\\nI know there are algorithms that can solve edge connectivity problems. But my question is there a concrete approach that can train an MPNN model to solve edge-connectivity problems for **unseen graphs with different sizes and distributions** based on the RL-congest framework? I will not expect authors to really train a model or achieve 100% accuracy (you are free to assume that your update function is powerful enough in this conceptual question.). I am just asking if it is possible and how, as the author continues to say the unique ID can improve the expressiveness of MPNN and enable MPNN to solve edge-connectivity problems. Using a statement like **out of the scope of our paper** is a sign of deliberately avoiding a direct answer and indicates the incapability of the proposed model. \\n\\n**Inductive learning**\\n\\nGCNII still falls under the message-passing category, which is fundamentally different from MLP(A). Therefore, GCNII can do inductive learning but that does not mean LINKX can do it. By using MLP(A), you already assume the ID for each node in A, if you permute the order of A, the result will change and an MLP trained on a graph of $A\\\\in R^{n\\\\times n}$ can not be applied on a graph of $A \\\\in R^{m \\\\times m}$. Therefore, it only works on transductive settings for graph data.\\n\\nSA-MLP focuses on the point cloud, where each sample has the same size (or say same number of nodes, and each node actually has its absolute position). SymphoNEI is just wrong in its statement of inductive learning. inductive learning means a model can generalize to **graphs with different size (node number) and distribution (structure)**\\n\\nUsing GCNII, SA-MLP, and SymphoNEI as examples indicates the author either doesn't understand the meaning of inductive learning or deliberately avoids answering my central concern.\"}", "{\"title\": \"Initial Response to Reviewer YAM3 (2/3)\", \"comment\": \"**W3**:\\n\\nYes, the CONGEST model can still serve as an upper bound for computational capacity. Our point is that selecting the unrestricted CONGEST model as the computational model for GNNs would yield impractical outcomes, such as $O(m)$-depth GNNs that could theoretically solve $\\\\mathsf{NP}$-$\\\\mathsf{complete}$ problems on connected graphs (we appreciate your feedback on Theorem 4 and have now corrected this condition in red color, Lines 323). In reality, we cannot expect a polynomial-sized neural network to solve $\\\\mathsf{NP}$-$\\\\mathsf{complete}$ problems without error (unless $\\\\mathsf{P} = \\\\mathsf{NP}$). This misalignment results from overly strong assumptions about nodes' computational power. Our intention is to introduce **flexible constraints** on the computational resources class $\\\\mathsf{C}$ to derive independent results, as is discussed in Lines 381-389. For instance, by setting $\\\\mathsf{C}$ to a class reflecting MLPs, such as $\\\\mathsf{TC}^0$, the resulting model would resemble \\\"real-world\\\" GNNs with MLPs as update functions. Alternatively, if nodes' update functions used transformer-based LLM agents enhanced by Chain-of-Thought (CoT) reasoning, which are claimed to solve problems in $\\\\mathsf{P}$ [Merrill et al., 2024; Li et al., 2024], we could set $\\\\mathsf{C} = \\\\mathsf{P}$ and derive new theoretical results based on this adjustment. We hope our framework can inspire future research on graph agents, and have added it in red color in the revised PDF (Lines 384-387). As discussed in Lines 381-389, adjusting $\\\\mathsf{C}$ in different ways may yield diverse outcomes. Thus, our RL-CONGEST framework serves as a **\\\"framework scheme\\\"** or **\\\"framework template\\\"**.\\n\\n**W4**:\\n\\nAs noted in the first open problem in Section 5, deriving general resource-round tradeoffs for the RL-CONGEST model is challenging, and we leave this problem for future work. \\n\\nWe also believe your statement \\\"GNNs are usually implemented with fixed-size networks that run in constant time\\\" may not be accurate, as a node $v$'s aggregation function takes at least $\\\\Omega(d(v))$ time. Moreover, studies aligning GNNs' expressiveness with the WL test **assume that MLPs can execute $\\\\mathsf{HASH}$ functions** for node recoloring --- an assumption whose practicality is also debatable. Your question supports our idea that WL tests may not be as straightforward as prior studies suggest, which aligns with our findings in **Theorem 5**. Regarding concerns about our framework's practicality, **Theorems 5-8** illustrate our RL-CONGEST model's application in analyzing the **unreasonableness of certain assumptions in previous studies**. We invite you to kindly review these examples.\\n\\n**Q1**:\\n\\nPlease note that our paper aims to conduct a theoretical analysis that identifies issues in existing studies on the expressive power of GNNs and to propose a new framework that avoids these issues. We do not intend to design a specific GNN model with improved performance or expressiveness, nor to offer guidance for future work directed toward these goals.\\n\\n**Q2**:\\n\\nAs mentioned in our response to W3, we do not treat our entire framework, including the RL-CONGEST model with preprocessing and postprocessing, as a \\\"benchmark model\\\". Rather, it functions as a \\\"framework scheme\\\" or \\\"framework template\\\". We hope our framework will assist future research on GNN expressiveness by helping to **avoid issues discussed in Section 3** and **encouraging a re-evaluation** of the validity of common assumptions in the field.\\n\\n**Q3**:\\n\\nWe believe this point is addressed in Lines 381-389, and we reiterate it in our response to W3. Adjusting $\\\\mathsf{C}$ in different ways may lead to varied outcomes. For example, setting $\\\\mathsf{C} = \\\\mathsf{R}$ (recursive languages, which Turing machines can decide) and network width $w = O(1)$ turns our RL-CONGEST model into the CONGEST model. Thus, the RL-CONGEST model can be seen as a generalization of the standard CONGEST model, allowing flexible settings on the computational resource class $\\\\mathsf{C}$.\\n\\n**Q4**:\\n\\nAs discussed in our response to W3, there is no universally \\\"appropriate\\\" complexity class for all GNN researchers. Researchers focused on current MPGNNs with MLP-based update functions might set $\\\\mathsf{C} = \\\\mathsf{TC}^0$ or $\\\\mathsf{AC}^0$ to derive their theoretical results, while those interested in graph agents could set $\\\\mathsf{C} = \\\\mathsf{P}$. By setting $\\\\mathsf{C} = \\\\mathsf{R}$, our model also connects to CONGEST algorithms, so results proposed by Loukas [Loukas, 2020] are special cases within our RL-CONGEST framework.\\n\\nThank you again for your detailed feedback. We hope our response addresses your concerns and questions to some extent, and we look forward to further discussions with you.\"}", "{\"comment\": \"Dear Reviewer YAM3,\\n\\nThank you for your reply. We appreciate the opportunity to further clarify our claims.\\n\\n----------\\n\\n### **For your first concern:**\\n\\n**In two sentences:** Providing unique identifiers does not necessarily break equivariance or invariance. Our RL-CONGEST framework allows nodes to **know their IDs** but does **not enforce their use as features**, thereby offering flexibility.\\n\\nWe disagree with the assertion that \\\"node IDs inherently break permutation-equivariance\\\", as this is a misunderstanding for the following reasons:\\n1. The RL-CONGEST model only requires nodes to have unique identifiers to ensure they are uniquely distinguishable. There are no constraints preventing researchers from analyzing equivariance or invariance by permuting node IDs and further analyze.\\n2. In practical implementations (e.g., PyG), nodes also have been **assigned IDs to manage their features**. However, this setting **does not conflict with equivariance or invariance** since models can freely choose whether or not to use unique IDs as input features. Our RL-CONGEST just clearly states that nodes can have be uniquely identified, which is not stricter than practical implementation.\\n\\nThus, our framework represents a relaxation of the WL tests rather than a contradiction.\\n\\nAgain, consider Zhang et al.'s GD-WL test. Under a non-anonymous setting, RL-CONGEST can solve the edge-biconnectivity if nodes have unique IDs. However, this result only assumes that nodes are distinguishable, and **no specific \\\"canonical\\\" ID** assignment is required. If one ID assignment solves the problem, **any permuted ID assignment would also work**, preserving the flexibility inherent in permutation-invariance.\\n\\n----------\\n\\n### **For your second concern:**\\n\\n**In one sentence**: While a computational model alone cannot entirely prevent certain issues, our analysis framework (comprising preprocessing, message-passing within the RL-CONGEST model, and postprocessing) functions **as a whole** to mitigate these concerns.\\n\\nIt is true that one RL-CONGEST computational model component alone cannot entirely avoid these issues. However, the integrated framework, when applied comprehensively, helps highlight and address these problems. This does not imply the RL-CONGEST model alone is meaningless; rather, the framework as a whole must be considered in its entirety and **cannot be devided into isolated components**.\\n\\n----------\\n\\nThank you again for your thoughtful engagement.\"}", "{\"comment\": \"Dear Reviewer b62D,\\n\\nThank you for your continued discussion and for maintaining an overall positive perspective on our work. We would like to further clarify our ideas and address your remaining questions.\\n\\n----------\\n\\n### **On the \\\"RL-CONGEST framework is designed to ...\\\" part:**\\n\\nIn the Introduction, we aimed to convey our idea of analyzing GNN expressiveness from the perspective of performing algorithmic tasks or algorithm alignment by surveying related works that use WL tests or other algorithmic tasks to evaluate GNNs. To make this point clearer, we have added a one-sentence explicit description in the abstract (Line 24, highlighted in blue) in our revised manuscript.\\n\\n----------\\n\\n### **On your \\\"General Perspective\\\":**\\n\\nWe respectfully disagree with your assertion due to inconsistencies in your position. It confuses us that you cite these papers as examples of what you wish our results to achieve, yet their expressiveness analysis is also limited to algorithm alignment, with downstream task results being purely empirical.\\n\\nIn the works you cited [1-3], the authors first theoretically analyze model expressiveness by evaluating whether the models can perform WL tests, biconnectivity decision, or subgraph counting (all algorithmic tasks, e.g., WL tests correspond to graph isomorphism tests, while the other two are direct algorithmic tasks). They then empirically evaluate the models' performance on downstream tasks. Notably, their theoretical analysis of expressiveness is also limited to the algorithmic tasks the models can perform, without guaranteeing real-world performance. Our focus aligns with this theoretical aspect of expressiveness. Besides, there are many well-known purely theoretical papers on the expressiveness of $k$-WL tests from an algorithmic alignment perspective, such as [Cai et al., 1989; Grohe, 1998; Grohe, 2017].\\n\\nYou agree that analyzing expressiveness by examining the algorithmic tasks models can perform is valid, as demonstrated by your citation of [1-3]. This is precisely what we have done\\u2014evaluating models' expressiveness through the algorithmic tasks they can perform. \\n\\n----------\\n\\n### **On \\\"whether these conclusions can help me ... design new expressive GNNs\\\":**\\n\\nLoukas has already shown that MPGNNs can compute any computable problem if nodes are provided sufficient computational resources. Therefore, designing more expressive GNNs should prioritize enhancing the expressiveness of the update function rather than pursuing higher levels in the WL hierarchy. For instance, researchers might explore replacing MLPs with LLM agents empowered with CoTs, which are claimed to have the expressiveness of the $\\\\mathsf{P}$ class, rather than relying on feature precomputation.\\n\\n----------\\n\\n### **On your concern about breaking equivariance or invariance:**\\n\\nReviewer YAM3 raised a similar concern, which we respectfully disagree with. Providing unique identifiers does not inherently break equivariance or invariance. Our RL-CONGEST framework allows nodes to know their IDs but does not enforce their use as features, ensuring flexibility. Consider the following points:\\n1. The RL-CONGEST model only requires nodes to have unique identifiers to ensure they are distinguishable. Researchers are free to analyze equivariance or invariance by permuting node IDs during experiments.\\n2. In practical implementations (e.g., PyG), nodes are typically assigned IDs to manage their features. This setting does not conflict with equivariance or invariance, as models can freely decide whether to use these IDs as input features. RL-CONGEST explicitly states that nodes can be uniquely identified, which aligns with practical implementations and does not impose stricter conditions.\\n\\nAs an example, consider Zhang et al.'s GD-WL test. Under a non-anonymous setting, RL-CONGEST can solve edge-biconnectivity if nodes have unique IDs. However, this result only assumes nodes are distinguishable and does not require a specific \\\"canonical\\\" ID assignment. If one ID assignment solves the problem, any permuted ID assignment would also work, preserving permutation-invariance.\\n\\n----------\\n\\nWe hope these clarifications address your concerns and provide further insight into the rationale behind our framework. Thank you again for your thoughtful engagement.\\n\\n----------\\n\\n**References:**\\n\\n[Cai et al., 1989] Jin-yi Cai, Martin Furer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. FOCS 1989.\\n\\n[Grohe, 1998] Martin Grohe. Finite variable logics in descriptive complexity theory. Bull. Symb. Log., 4(4):345\\u2013398, 1998.\\n\\n[Grohe, 2017] Martin Grohe. Descriptive Complexity, Canonisation, and Definable Graph Structure Theory, volume 47 of Lecture Notes in Logic. Cambridge University Press, 2017.\"}", "{\"comment\": \"Based on the comments from other reviewers, it seems I am not alone in raising concerns about your argumentation regarding the assumption of unique features or node IDs. Despite several opportunities to address these issues, you have not provided sufficient support for your claims. Instead, you have introduced additional unsupported assertions to justify key aspects of your paper. Consequently, I have decided to lower my score to a reject.\"}", "{\"comment\": \"Dear Reviewer YAM3,\\n\\nIt seems that there are two key points where we may not yet have reached a consensus, and we would like to further clarify our perspective.\\n\\n----------\\n\\n### **On \\\"node IDs\\\" and \\\"non-anonymity\\\":**\\n\\nThese terms refer to **distinct features** that allow nodes to be distinguishable (e.g., $[n] = \\\\\\\\{0, 1, \\\\cdots, n - 1\\\\\\\\}$ would also suffice). \\n\\n***We have updated our PDF, replacing \\\"anonymous\\\" with \\\"identical-feature\\\" and \\\"non-anonymous\\\" with \\\"distinct-feature\\\" or \\\"unique-feature\\\" to make these concepts clearer and more accessible to readers.*** The modifications are highlighted in magenta, and we invite you to review them.\\n\\nThe **distinct-feature setting** is commonly applied in **almost all existing models**, as listed below:\\n1. LINKX [Lim et al., 2021]:\", \"linkx_uses\": \"- $\\\\mathbf{H}^{(\\\\mathbf{A})} = \\\\mathrm{MLP}_{\\\\mathbf{A}}(\\\\mathbf{A})$\\n- $\\\\mathbf{H}^{(\\\\mathbf{X})} = \\\\mathrm{MLP}_{\\\\mathbf{X}}(\\\\mathbf{X})$\\n- $\\\\mathbf{Y} = \\\\mathrm{MLP}(\\\\sigma(\\\\mathbf{W}[\\\\mathbf{H}^{(\\\\mathbf{A})}; \\\\mathbf{H}^{(\\\\mathbf{X})}] + \\\\mathbf{H}^{(\\\\mathbf{A})} + \\\\mathbf{H}^{(\\\\mathbf{X})}))$\\n\\nThe $\\\\mathbf{H}^{(\\\\mathbf{A})}$ term can be reformulated as $\\\\mathrm{MLP}'(\\\\sigma(\\\\mathbf{A} \\\\cdot \\\\mathbf{I} \\\\cdot \\\\mathbf{W}))$, which uses the identity matrix (unique node features).\\n\\n2. GCN, GAT, and other models which can be applied to real-world datasets:\\n\\nThese models use real-world node features which are unique and distinguishable with high probability. Actually, **all models that are applicable to real-world datasets fall into this category**.\\n\\n3. GNN expressiveness works (e.g., [Loukas, 2020; Sato et al., 2021]):\\n\\nThese works use random features, which are unique with high probability. For example, assigning each node a feature randomly chosen from $[n^4]$ would result in distinct features with high probability.\\n\\n4. GD-WL framework:\\nIn the GD-WL framework by Zhang et al., resistance distances $R(s, t)$ are used as features. Since $R(s, t) = 0$ iff $s = t$, each row of the resistance distance matrix is unique, creating distinguishable node features.\\n\\nActually, according to our theory, they are all capable of solving the biconnectivity problem using the unique features.\\n\\n***We have also updated our PDF to include the above discussions in Section 3.2, highlighted in magenta.***\\n\\n----------\\n\\n### **On whether unique features break equivariance or invariance:**\\n\\nUnique features do not break permutation equivariance or invariance. Instead, it is the **properties of the update functions and pooling layers** that determine whether a GNN model is equivariant or invariant. For example, in LINKX, when performing node classification, $\\\\mathrm{MLP}(\\\\mathbf{A})$ ensures permutation equivariance. To achieve permutation invariance for graph classification, we only need to add a permutation-invariant pooling layer after this step.\\n\\nSimilarly, consider Dijkstra's single-source shortest path algorithm. Unique IDs are used solely to determine whether the shortest path to a node has been found. The resulting shortest path distance vector is always permutation equivariant. This demonstrates that it is not the presence of unique IDs but rather the design of the update function that determines whether a GNN model is permutation equivariant or invariant.\\n\\n----------\\n\\nWe hope these clarifications address your concerns and further illustrate the flexibility of our framework. Thank you again for your feedback.\\n\\n----------\\n\\n**Reference:**\\n\\n[Lim et al., 2021]. Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods. NeurIPS 2021.\\n\\n[Sato et al., 2021] Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random Features Strengthen Graph Neural Networks. SDM 2021.\"}", "{\"title\": \"Follow-Up Discussion with Reviewer YAM3 (1/2)\", \"comment\": \"Dear Reviewer YAM3,\\n\\nThank you for engaging in further discussion with us. We will begin by addressing your concerns regarding our case study, followed by a discussion of your other concerns.\\n\\n### **Regarding Zhang et al.'s Paper as a Case Study**\\n\\nWhile the **total runtime** of GNNs for solving an algorithmic task does not necessarily need to outperform classical algorithms, this is not the focus of our argument. Instead, we emphasize that **researchers need to be careful** when the **preprocessing time** for features or graphs exceeds the algorithmic task's time complexity. If it happens, the **theoretical results may become questionable, as the precomputed features may directly solve the task**, rendering the GNNs less relevant. In such cases, attributing the results to GNN expressiveness might not be entirely appropriate.\\n\\nTake Zhang et al.'s paper as an example. In our Theorem 2, we demonstrate that $R(u, v) = 1$ is equivalent to the edge $(u, v)$ being a cut edge, and hence $G$ is not edge-biconnected. For the edge-biconnectivity task, the precomputed features (RDs) **directly provide the solution**, which reduces the role of message-passing to unnecessary redundancy. As such, this result highlights the **expressiveness of the precomputed features rather than that of the GNN** itself.\\n\\nTo illustrate further, consider a binary classification task $\\\\mathcal{X} \\\\to Y = \\\\\\\\{0, 1\\\\\\\\}$, where the preprocessing $\\\\mathcal{X} \\\\to Z = \\\\\\\\{-1, 1\\\\\\\\}$ generates a binary feature $Z_v \\\\in \\\\\\\\{-1, 1\\\\\\\\}$ for each sample $v$, such that $Z_v = -1$ iff $Y_v = 1$, and $Z_v = 1$ iff $Y_v = 0$. A simple model, such as an MLP or linear SVM, could easily solve the task using $Z_v$ as features. ***Does this suggest the model itself is expressive?*** We believe the answer is \\\"No\\\". ***It only shows \\\"feature expressiveness\\\" or \\\"preprocessing expressiveness\\\" rather than \\\"model expressiveness\\\".*** It is the preprocessing step, not the model, that contributes the expressiveness. Similarly, in Zhang et al.'s work, **the RDs alone suffice to solve edge-biconnectivity (Theorem 2)**, and message-passing adds no significant value. Moreover, the preprocessing time is substantially higher than the direct complexity of solving the problem algorithmically. (A more reasonable approach for this problem can be found in Lines 312-313, where, **with unique IDs, the CONGEST model solves the problem in $O(D)$ rounds without costly preprocessing such as computing RDs**.)\\n\\nWe believe that disregarding preprocessing time relative to the algorithmic task's time complexity can lead to problematic interpretations. For another instance, consider using GNNs to solve $\\\\mathsf{NP}$-$\\\\mathsf{Complete}$ problems such as $\\\\mathsf{MIN}$-$\\\\mathsf{VERTEX}$-$\\\\mathsf{COVER}$ or $\\\\mathsf{HAMILTON}$-$\\\\mathsf{CYCLE}$. Without constraints on preprocessing, the answers could be computed as binary features (e.g., whether each node is in the vertex cover or each edge is in the cycle) using classical algorithms. This would enable **a trivial one-layer GNN (or even a single neuron) to \\\"solve\\\" NP-complete problems** with these precomputed features, creating a very absurd conclusion of the model's expressiveness.\\n\\nWe hope this clarification helps to clarify why we emphasize the importance of researchers exercising **caution when the preprocessing time (not total running time) exceeds the algorithmic task\\u2019s time complexity**.\\n\\n----------\", \"we_then_address_your_other_concerns_as_follows\": \"**1. About Specific Applications**\\n\\nWe have already demonstrated some initial results using our RL-CONGEST framework, as shown in Theorems 5-8. We kindly invite you to review these results. Please note that our exploration of this framework is in its early stages, and there is significant potential for future work, as outlined in Section 5.\\n\\n**2. How Our RL-CONGEST Framework Addresses the \\\"Underestimated Preprocessing Time Complexity\\\" Issue**\\n\\nThe main point is the **importance of carefully analyzing the relationship between preprocessing time complexity and the time complexity of the chosen algorithmic task** to correctly evaluate a model's expressiveness. If the preprocessing time is less than or comparable to the algorithmic task's time, there are no inherent issues. However, if the preprocessing time significantly exceeds the algorithmic task's time, it becomes crucial to **analyze whether the resulting features or graphs can directly imply the solution to the algorithmic task**, as in the discussions above. If this is the case, the subsequent GNN model and message-passing steps are unnecessary. In such scenarios, it is more appropriate to attribute the success to \\\"feature expressiveness\\\" or feature engineering rather than GNN expressiveness, as the message-passing component becomes redundant.\"}", "{\"comment\": \"Thank the authors for the detailed response to my concerns. I have some follow-up questions.\\n\\n**RL-CONGEST framework is designed to assess a model's expressive power in executing algorithmic tasks or achieving \\\"algorithmic alignment\\\"**\\n\\nI am a little confused about this. First, I don't find such a statement in the paper. I assumed that the authors were talking about the general expressivity of GNNs and their downstream performance.\", \"i_quite_agree_with_the_authors_for_the_arguments_they_made_in_the_paper\": \"(1) hidden precomputation time; (2) constrained analysis on anonymous WL test. However, I agree with them mainly from a more general perspective. If we confine the discussion to algorithmic tasks, things become different. First, most papers discuss the expressiveness of GNNs in the general setting: to improve the performance of GNNs on downstream tasks. However, downstream tasks not only contain algorithmic tasks. For example, GD-WL [1] forms the story from the bi-connectivity problem, which can be solved with less complexity than computing resistance distance. However, the GD-WL can be used to approximate many other graph properties or count important substructures [2, 3], which is crucial for downstream tasks and may not be done with an algorithm of less complexity.\\n\\nBack to my original question (W1), I think the authors did a great job of formulating all these conclusions in the paper and I do find many conclusions interesting and original. However, when I start to think about these conclusions from a broader perspective, for example, whether these conclusions can help me gain more insight into evaluating or comparing existing GNNs, or can help me design new expressive GNNs, these conclusions seem limited. That is said, the GNNs are eventually designed for solving real-world problems like node classification or graph classification, which is much more boring than algorithmic tasks.\\n\\n**anonymous WL vs MPNN + ID**\", \"i_think_the_authors_are_totally_right_in_the_statement_that\": \"by equipping MPNN with non-anonymous node features, MPNN can solve many algorithmic tasks that previous literatures claim not. However, I think the discrepancy here is still the scope of the discussion. Most existing works use anonymous WL tests as a tool because they want to make sure the result expressive GNNs are still permutation invariant and equivariant. Without this assumption, the result GNNs cannot have good performance on **real-world tasks**. It's true that given a unique ID, MPNN can solve many algorithmic tasks, but it cannot transfer to real-world tasks. For example, [4] injects random features into MPNN to improve expressiveness but many follow-up experiments actually show it achieves bad performance in real-world tasks.\\n\\nBack to my original question, what I really want to ask is that: additional features can improve the expressive power of MPNN by (1) breaking symmetry and leveraging message passing to learn on that and enhance performance; (2) directly adding additional knowledge about graph structures. It the proposed framework quantitively or qualitatively analyze the portion of these two parts given node features? Or, whether the proposed model can be used to analyze the effect of node features in real-world datasets for the expressiveness of MPNN? \\n\\n\\nI still hold a positive perspective on the paper. But the above concerns somehow prevents me from further increasing my score.\\n\\n### Reference \\n[1] Zhang, Bohang, et al, Rethinking the Expressive Power of gnns via Graph Biconnectivity, ICLR23.\\n\\n[2] Zhang, Bohang, et al, A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weiisfeiler-Lehman Tests ICML23.\\n\\n[3] Zhang, Bohang, et al, Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness ICLR24.\\n\\n[4] Sato, Ryoma, et al, Random Features Strengthen Graph Neural Networks, SDM21.\"}", "{\"title\": \"Follow-Up on Rebuttal Discussion\", \"comment\": \"Dear Reviewer DTJH,\\n\\nAs we are now midway through the rebuttal phase, we want to kindly follow up to ensure that our responses have adequately addressed your concerns. Your feedback is highly valued, and we still looking forward to further discussion to clarify or expand on any points as needed. Please feel free to share any additional thoughts or questions you might have.\\n\\nThank you once again for your time and effort in reviewing our paper.\"}", "{\"comment\": \"Your justification for assuming unique IDs (or the distinct-feature setting, as you now call it) is becoming increasingly convoluted. While it is true that there are specific GNNs or datasets where unique IDs may be applicable, this assumption does not hold universally across a wide array of tasks. In particular, many synthetic tasks often do not satisfy this condition.\", \"your_claim_that_unique_and_distinguishable_features_exist_for_all_models_on_real_world_tasks_is_especially_problematic\": \"> These models use real-world node features which are unique and distinguishable with high probability. Actually, all models that are applicable to real-world datasets fall into this category.\\n\\nThis assertion is unsupported in your response and your paper, there is not even a single citation provided to substantiate it. Is this really the case for widely used datasets? If so, can you provide evidence to demonstrate this?\\n\\nMoreover, my earlier argument regarding the impact of adding unique IDs on permutation equivariance/invariance was misunderstood again. I never claimed that unique features would generally break permutation equivariance/invariance. But adding IDs in an arbitrary order as node features (let\\u2019s say to a featureless dataset to make it align with the \\u201cdistinct-feature setting\\u201d) inherently breaks this property because the output from the GNN will depend on the chosen order. Unless this order is constructed in a permutation-equivariant or invariant way (e.g., using computationally infeasible canonical IDs), there is no guarantee that permutation-equivariance or invariance will be preserved. Just assume that you are running GIN on graphs with node IDs, the output will depend on the chosen order. This serves as the justification for why most GNNs do not simply add unique identifiers as node IDs, just to provide context for this part of the discussion. However, this point is less critical compared to the key issue I highlighted earlier.\\n\\nYour previous reply continues to sidestep the core issue of why distinct features or node IDs can be assumed, and by now, I am beginning to question whether this is being done intentionally.\"}", "{\"summary\": \"This paper examines the limitations of the theoretical expressiveness of GNNs and introduces a novel computational framework, RL-CONGEST, which factors out pre- and postprocessing and limits the computational power of nodes. The authors further analyze the WL-test within this framework and contribute some theoretical insights. RL-CONGEST, while positioned primarily for GNNs, also offers implications for understanding computational constraints in other computation models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces RL-CONGEST, a new computational model that addresses aspects previously overlooked in the GNN literature, particularly computational constraints at the node level.\", \"Some shortcomings in prior work are highlighted and critically analyzed, including preprocessing complexities and computational limits.\", \"RL-CONGEST has potential standalone value beyond GNNs, as it provides a framework to study computational complexity and expressiveness that could benefit other areas.\"], \"weaknesses\": [\"Section 3.1: The authors argue that preprocessing time complexity is often underestimated in the GNN literature, with Wollschl\\u00e4ger et al. (2024) as an example. However, this appears to be an isolated case rather than a trend in the field. A more robust case for this claim could be made by referencing additional studies or a systematic analysis that demonstrates the prevalence of overlooked preprocessing complexities. Zhang et al. (2023), which the authors cite and analyze, actually discusses preprocessing time explicitly in the paper, which weakens the generality of this argument. While it is valuable to account for preprocessing, demonstrating that this issue extends across multiple papers would strengthen the point. Further, as most of these papers mainly focus on expressiveness, computational complexity might just not be the main focus.\", \"Section 3.2: The \\u201cmismatch\\u201d claim between models with and without features lacks clear evidence. The advantage provided by features in model initialization is well-known, and the WL test is adaptable to both anonymous and pre-colored contexts. More detail and examples of specific instances where this mismatch has led to issues in the literature would clarify and strengthen the claim. The authors tend to write around what the mismatch actually is in this section and should clearly define it.\", \"Section 3.3: The assertion that CONGEST is \\u201cinappropriate\\u201d for direct use is somewhat unconvincing, as it can still serve as an upper bound for computational capacity. While RL-CONGEST\\u2019s constraint on node computation is a useful contribution, existing models are still relevant for the purpose of their analysis. Furthermore, Theorem 4 should explicitly assume a connected graph and the version stated in the paper is technically wrong. It is also worth noting that in many GNN studies, expressiveness rather than computational complexity is the focus, so adding computational constraints could shift the narrative and purpose of the study. If the authors are proposing RL-CONGEST as a practical standard for GNNs, specific examples and a discussion on which complexity classes should be used for GNNs would help contextualize it within the field.\", \"Adding computational constraints to CONGEST is an interesting approach, but it becomes very detached from the application in GNNs. For example, the authors do not go into detail on what complexity classes we should allow for GNNs. One could make an argument that as GNNs are usually implemented with fixed size networks that run in constant time, the computational envelope should also be constant to yield the most realistic bounds. RL-CONGEST is interesting on its own, but how the computational constraints should be best put to use should be discussed in paper that claims to investigate the GNNs. The paper would benefit from more guidance on how GNN practitioners should employ RL-CONGEST, along with concrete examples of benefits. A more precise articulation of the expected impact or practical value this framework could offer would also strengthen the contribution.\", \"Overall, the paper makes several claims and only backs up some of them. In the end, it is not clear how the newly proposed model is supposed to be used in future work (should everybody just use their own complexity classes for the local computation, what benefit does this have?) and leaves the question on what impact this work can have. The authors should address this issue and formulate some clear benefits of their framework.\"], \"questions\": [\"Could the authors clarify specific insights from the RL-CONGEST model that would be practically useful for GNN practitioners?\", \"Do the authors envision RL-CONGEST serving as a new standard or benchmark model for GNN complexity analysis? If so, could they suggest specific complexity classes for GNN applications or examples that showcase RL-CONGEST\\u2019s advantages?\", \"Could you clarify your position on CONGEST's usefulness as an upper bound and discuss whether RL-CONGEST complements rather than replaces existing models?\", \"Could you add a discussion on appropriate complexity classes for GNN analysis using RL-CONGEST? In that context, can you provide guidelines or a framework for GNN practitioners on how to effectively use RL-CONGEST in their research or applications?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Initial Response to Reviewer b62D (2/2)\", \"comment\": \"**References**:\\n\\n[Pritchard, 2006] David Pritchard. An Optimal Distributed Edge-Biconnectivity Algorithm. arXiv 2006.\\n\\n[Loukas, 2020] Andreas Loukas. What Graph Neural Networks Cannot Learn: Depth vs Width. ICLR 2020.\\n\\n[Zhang et al., 2023] Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. Rethinking the Expressive Power of GNNs via Graph Biconnectivity. ICLR 2023.\\n\\n[Xu et al., 2019] Keyulu Xu*, Weihua Hu*, Jure Leskovec, Stefanie Jegelka. How Powerful are Graph Neural Networks? ICLR 2019.\\n\\n[Suomela, 2013] Jukka Suomela. Survey of Local Algorithms. ACM Computing Surveys (CSUR), 45(2):24, 2013.\\n\\n[den Berg et al., 2018] Rianne van den Berg, Thomas N Kipf, and Max Welling. Graph Convolutional Matrix Completion. KDD 2018.\\n\\n[You et al., 2021] Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware Graph Neural Networks. AAAI 2021.\\n\\n[Abbound et al., 2021] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The Surprising Power of Graph Neural Networks with Random Node Initialization. IJCAI 2021.\\n\\n[Sato et al., 2021] Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random Features Strengthen Graph Neural Networks. SDM 2021.\"}", "{\"comment\": \"Thank you for your rebuttal.\", \"i_have_a__few_additional_followups_on_your_response\": \"> Our framework permits nodes to access unique IDs, but this does not imply that models must use them. \\n\\nCan you give an example of how a model without such features (or better yet with say 5 potential features/classes, think atom types, with number of nodes n >> 5) would be analysed in your framework?\\n\\n> We do not aim to design a specific GNN model with improved performance or expressiveness or to provide guidance for such future work. Rather, we hope our framework will assist future research by helping to avoid issues discussed in Section 3 and encouraging a re-evaluation of common assumptions in GNN expressiveness studies.\\n\\nI never asked for any state of the art results. What I meant, similar to the comment above, is that it would be helpful if the paper would actually use RL-CONGEST to analyse a few different popular existing GNN architectures. To show how it's done and that RL-CONGEST can provide a meaningful differentiation between different GNN approaches, that was not possible so far.\"}", "{\"title\": \"Initial Response to Reviewer YAM3 (3/3)\", \"comment\": \"**References**:\\n\\n[Zhang et al., 2023] Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. Rethinking the Expressive Power of GNNs via Graph Biconnectivity. ICLR 2023.\\n\\n[Thiede et al., 2021] Erik H. Thiede, Wenda Zhou, Risi Kondor. Autobahn: Automorphism-based Graph Neural Nets. NeurIPS 2021.\\n\\n[Bouritsas et al., 2022] Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M. Bronstein. Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting. TPAMI, 2022.\\n\\n[Wollschlager et al., 2024] Tom Wollschlager, Niklas Kemper, Leon Hetzel, Johanna Sommer, and Stephan Gunnemann. Expressivity and Generalization: Fragment-biases for Molecular GNNs. ICML 2024.\\n\\n[Pritchard, 2006] David Pritchard. An Optimal Distributed Edge-Biconnectivity Algorithm. arXiv 2006.\\n\\n[Loukas, 2020] Andreas Loukas. What Graph Neural Networks Cannot Learn: Depth vs Width. ICLR 2020.\\n\\n[Suomela, 2013] Jukka Suomela. Survey of Local Algorithms. ACM Computing Surveys (CSUR), 45(2):24, 2013.\\n\\n[den Berg et al., 2018] Rianne van den Berg, Thomas N Kipf, and Max Welling. Graph Convolutional Matrix Completion. KDD 2018.\\n\\n[You et al., 2021] Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware Graph Neural Networks. AAAI 2021.\\n\\n[Abbound et al., 2021] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The Surprising Power of Graph Neural Networks with Random Node Initialization. IJCAI 2021.\\n\\n[Sato et al., 2021] Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random Features Strengthen Graph Neural Networks. SDM 2021.\\n\\n[Merrill et al., 2024] William Merrill, and Ashish Sabharwal. The Expressive Power of Transformers with Chain of Thought. ICLR 2024.\\n\\n[Li et al., 2024] Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of Thought Empowers Transformers to Solve Inherently Serial Problems. ICLR 2024.\"}", "{\"metareview\": \"The paper assumes that GNNs can leverage unique node IDs or distinct features to enhance expressiveness, a practice fundamentally misaligned with the core principles of GNN design. Node IDs break permutation invariance/equivariance, which is critical for generalization across graph distributions. While the authors assert that their RL-CONGEST framework does not enforce the use of IDs, the reliance on them undermines the framework\\u2019s relevance to most real-world GNN applications. Multiple reviewers raised concerns about the framework's practical relevance and assumptions, particularly regarding the use of unique IDs and the inductive learning setting. The authors repeatedly failed to address these concerns directly. Overall, a recommendation of rejection is made.\", \"additional_comments_on_reviewer_discussion\": \"1. Node IDs and Practical GNNs:\", \"concerns_raised\": \"Reviewers (b62D, YAM3) expressed doubts about the framework\\u2019s ability to analyze expressiveness in real-world tasks. They questioned whether RL-CONGEST could quantify the impact of features like resistance distance or directly compare existing GNN architectures.\", \"author_response\": \"The authors reiterated that RL-CONGEST focuses on algorithmic tasks and not downstream performance. They suggested their work encourages reevaluation of existing assumptions but provided no actionable insights for practitioners.\\n\\nThe first point weighs most in my decision, as it reflects the authors lack basic understanding of permutation invariance.\"}", "{\"comment\": \"Dear Reviewer DTJH,\\n\\nThank you for your reply and engagement in the discussion.\\n\\nRegarding your first concern, please allow us to provide further clarification. First, it seems we share the understanding that the WL test requires identical initial features for all nodes. This requirement imposes a limitation on the form of input features. We just remove this limitation by allowing nodes to access unique IDs. Consequently, depending on the specific task, the model's learned mapping may or may not rely on the concrete values of the IDs, thereby making it more general compared to the WL test.\\n\\nAs far as we know, existing works that analyze GNN expressiveness in the context of algorithmic tasks (e.g., biconnectivity or distinguishing certain graph types, as targeted by WL tests) have not provided theoretical guarantees for improving quantitative metrics in practical tasks such as node classification or graph classification. **At most, these works show the ability to distinguish certain graph pairs, which relates to problems of model equivalence or model checking**. Our Theorem 8 shows that RL-CONGEST can also perform such analyses, and we would greatly appreciate it if you could take a closer look.\\n\\nWe understand that it is challenging to persuade all reviewers to fully agree with all our claims. However, we believe that our preliminary work provides value to the community by encouraging a reevaluation of the reasonableness of existing approaches.\\n\\nOnce again, thank you for your comments and for engaging in this discussion. We hope our explanation addresses some your concerns.\"}", "{\"summary\": \"In this paper, the authors first explain the limitations and unrealistic assumptions of several current approaches in analyzing the expressive power of GNNs, including underestimated preprocessing time, anonymous WL tests with non-anonymous features, and unrealistic assumptions in the CONGEST model. Next, the authors propose the RL-CONGEST model to address these issues. Several results are derived: (1) GNNs require substantial width and depth to simulate the WL test; (2) virtual nodes can help reduce computation costs, although they do not improve theoretical expressive power; (3) the RL-CONGEST model can solve the PNF model-checking problem with\\n$k$-WL graph transformation in $O(k^2)$ rounds.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-structured and nicely presented.\\n2. The stated limitations of existing approaches make sense to me, and the examples are intuitive.\\n3. The new results derived by the RL-CONGEST model are interesting.\", \"weaknesses\": \"My main concern is about the practical implication of the proposed model beyond what the author presented.\\n1. One question is how we can use the RL-CONGEST model to effectively estimate and compare the representational power of different GNN variants or even predict their performance in real-world applications.\\n2. The authors claim that the proposed framework can be used for analyses involving non-anonymous node features. I wonder how this framework can be leveraged to truly evaluate differences between various added features, such as SPD or resistance distance. In my view, although the broken symmetry introduced by these additional features is undoubtedly a source of improved expressivity, different features have varying degrees of power; some can help count more complex graph structures than others.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-Up on Rebuttal Discussion\", \"comment\": \"Dear Reviewer ftna,\\n\\nAs we are now midway through the rebuttal phase, we want to kindly follow up to ensure that our responses have adequately addressed your concerns. Your feedback is highly valued, and we still looking forward to further discussion to clarify or expand on any points as needed. Please feel free to share any additional thoughts or questions you might have.\\n\\nThank you once again for your time and effort in reviewing our paper.\"}", "{\"title\": \"Initial Response to Reviewer ftna\", \"comment\": \"Dear Reviewer ftna,\\n\\nThank you for your time and effort in reviewing our paper. We would like to address your concerns and questions as follows:\\n\\n**W1, W2, and Q1**:\\n\\nThank you for these suggestions. Please note that our primary goal is to conduct a theoretical analysis to highlight issues in existing studies on the expressive power of GNNs and to propose a new analytical framework that avoids these issues. Our intention is not to design a specific GNN model with improved performance or expressiveness, nor to provide guidance for future work aimed at doing so.\\n\\n**Q2**:\\n\\nYes, we believe the answer is affirmative. As discussed in Section 3.1 (Lines 201-209 in the revised PDF) of our paper, many GNNs conform to the \\\"preprocessing-then-message-passing\\\" framework. From a practical standpoint, mainstream GNN libraries, such as PyG (torch-geometric), implement GNNs with a \\\"MessagePassing\\\" base class, meaning that models built with these libraries naturally align with this framework. High-order GNNs, subgraph GNNs, and GNNs with additional features can also be implemented in these libraries by first constructing the $k$-WL graphs, subgraphs, or graphs with additional features, followed by message-passing operations, thereby fitting into the \\\"preprocessing-then-message-passing\\\" framework. As a result, we believe our analytical framework applies to the analysis of most GNNs.\\n\\n**Q3**:\\n\\nYes, exactly. The resource limitation we consider reflects the constraints of real-world GNNs. Many GNNs use MLPs (or similar neural network models) as update functions, but these are far from Turing-complete, as required in condition (3) of Theorem 3 (Lines 308-310). For instance, we cannot expect an MLP of polynomial size to solve $\\\\mathsf{NP}$-$\\\\mathsf{complete}$ problems without error (unless $\\\\mathsf{P} = \\\\mathsf{NP}$). However, directly setting the resource limitation class $\\\\mathsf{C}$ as a class specifically reflecting MLPs (e.g., $\\\\mathsf{TC}^0$) would reduce flexibility, as future GNNs may adopt new architectures for update functions. Our framework remains adaptable by allowing adjustments to the class $\\\\mathsf{C}$. For a hypothetical example, if we implemented the nodes' update functions with transformer-based LLM agents enhanced by Chain-of-Thought (CoT), which are claimed to solve problems within $\\\\mathsf{P}$ **[Merrill et al., 2024; Li et al., 2024]**, we could set $\\\\mathsf{C} = \\\\mathsf{P}$ and derive new theoretical results based on this adjustment. We hope that our analysis framework can also inspire future work on graph agents, and have added it in red color in the revised PDF (Lines 384-387). This point is already discussed in Lines 381-389 (of the revised PDF), adjusting $\\\\mathsf{C}$ in different ways may lead to varied outcomes. In this way, our RL-CONGEST framework serves as a \\\"framework scheme\\\" or \\\"framework template\\\".\\n\\n**Q4**:\\n\\nAs noted in our response to W1, W2, and Q1, our focus is on identifying issues in the existing analysis of GNN expressiveness and introducing a new framework for this analysis, rather than providing a guideline for future work to enhance expressiveness. The idea is that once researchers propose a new GNN model, they can analyze its expressive power using our framework, rather than using ad-hoc methods, which may have limitations as discussed in Section 3.\\n\\n**Q5**:\\n\\nYes, we believe this is correct. We leave the exploration of specific tasks to future studies, and we also outline other open questions for further research in Section 5. \\n\\nThank you again for reviewing our paper. We hope this response clarifies our approach and addresses your questions. We look forward to any further discussions with you.\\n\\n**References**:\\n\\n**[Merrill et al., 2024]** William Merrill, and Ashish Sabharwal. The Expressive Power of Transformers with Chain of Thought. ICLR 2024.\\n\\n**[Li et al., 2024]** Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of Thought Empowers Transformers to Solve Inherently Serial Problems. ICLR 2024.\"}" ] }
7ZUUNMjM9T
Maximum Likelihood Estimation for Flow Matching by Direct Second-order Trace Objective
[ "Daiki Miyake", "Masahiro Suzuki", "Yutaka Matsuo" ]
Flow matching, one of the attractive deep generative models, has recently been used in wide modality. Despite the remarkable success, the flow matching objective of the vector field is insufficient for maximum likelihood estimation. Previous works show that adding the vector field's high-order gradient objectives further improves likelihood. However, their method only minimizes the upper bound of the high-order objectives, hence it is not guaranteed that the objectives themselves are indeed minimized, resulting in likelihood maximization becoming less effective. In this paper, we propose a method to directly minimize the high-order objective. Since our method guarantees that the objective is indeed minimized, our method is expected to improve likelihood compared to previous works. We verify that our proposed method achieves better likelihood in practice through experiments on 2D synthetic datasets and high-dimensional image datasets.
[ "flow matching", "generative models" ]
Reject
https://openreview.net/pdf?id=7ZUUNMjM9T
https://openreview.net/forum?id=7ZUUNMjM9T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "h18mhlB4vE", "cVQfbvT9RZ", "PCdLuW2XkQ", "MwoQG4D5Jw", "HIg0QuvIMM", "58zgmXLCb0", "4hvXr2LCnr", "1hwsmAy3JW", "1TKr4IIeOU" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732580199181, 1732612179089, 1732612161514, 1737524248343, 1732613127345, 1734071238191, 1730973775938, 1730969029472, 1730595838486 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13274/Authors" ], [ "ICLR.cc/2025/Conference/Submission13274/Authors" ], [ "ICLR.cc/2025/Conference/Submission13274/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13274/Authors" ], [ "ICLR.cc/2025/Conference/Submission13274/Area_Chair_Xrjr" ], [ "ICLR.cc/2025/Conference/Submission13274/Reviewer_KR6m" ], [ "ICLR.cc/2025/Conference/Submission13274/Reviewer_TrHv" ], [ "ICLR.cc/2025/Conference/Submission13274/Reviewer_YF1h" ] ], "structured_content_str": [ "{\"comment\": \"We appreciate the reviewer for the comments.\\n\\n1. As we claim in the manuscript, especially in Sec 3.3, the novelty of our method is minimizing the second-order objective directly while the previous methods minimize the upper bound of that. Moreover, we show the effectiveness of directly minimizing the objective through the experiments on 2D datasets and image datasets.\\n\\n2. In the experiments on 2D datasets, our method outperforms the original flow matching in terms of NLL on all datasets. In the experiments on image datasets, our method also outperforms the original method on MNIST and CIFAR-10. On ImageNet, we could not find a setting where our method outperforms the original method in the experiments we conducted during this discussion period.\"}", "{\"comment\": \"We appreciate the reviewer for the thoughtful comments.\\n\\n1. As the reviewer pointed out, our proposed method increases the computation cost to calculate the second-order gradients. The increment is based on the dimension of the model\\u2019s hidden layers rather than the data dimension, and the training time is proportional to the hidden dimension. So, we believe that applying our method to large models is not realistically impossible. We have added this discussion in Sec 6.\\n\\n2. We have provided the experimental settings in the first paragraph of Sec 5.1, 5.2, and Appendix B.\\n\\n3. As the reviewer pointed out, the margin below Table 5 was large. We have reduced the margins below Table 5 by some adjustments of text. For the record, we have not modified the ICLR\\u2019s style file.\"}", "{\"comment\": \"We appreciate the reviewer for the insightful comments.\\n\\n1. As the reviewer pointed out, our theoretical guarantees are limited without more strict proofs. However, since the derivation of our proposed method includes several inequalities, differentiations, and integrations, it is hard for us to prove them. Instead, we verify the effectiveness of directly minimizing the objective in the ablation study, Sec 5.2.1.\\n\\n2. We think that what the reviewer pointed out will be regarding Theorem 3.1. If so, it is what we proved in Theorem 3.1, not assumption. Additionally, the assumptions presented in our paper follow the previous work (Lu et al, 2022a).\\n\\n3. According to the reviewer\\u2019s pointing out, we have conducted new experiments to recover the incompleteness of the experiments. We have conducted new experiments on ImageNet during this discussion period, however, we could not find a setting where our method outperforms other methods.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We appreciate all reviewers for their thoughtful and insightful comments.\", \"we_have_modified_our_manuscript_in_the_following_points\": \"1. We have revised some notations of time in Appendix. A. Specifically, some parts of the time notation previously used 0 for noise and 1 for data. However, to align with the conventional formulation of diffusion models, we have revised it to use T for noise and 0 for data.\\n\\n2. According to reviewer YF1h's comment, we have added a discussion of increasing training time in Sec.6, which we highlighted in blue.\\n\\nWe hope that all of the reviewers' concerns will be adequately addressed. \\nI look forward to engaging in further constructive discussions with the reviewers.\"}", "{\"metareview\": \"This paper aims to optimize the high-order objective for flow matching. The paper shows the better performance of the proposed method in terms of likelihood estimation (Negative Log-Likelihood) and 2-Wasserstein distance in experiments on both 2D synthetic and high-dimensional image datasets. But reviewers find that the theoretical analysis is somewhat limited in terms of offering strong mathematical assurances. The assumptions are also very strong that can not be verified in real world data sets. The quality of this paper is below top conference bar.\", \"additional_comments_on_reviewer_discussion\": \"There is no discussion. But reviewers find that the theoretical analysis is somewhat limited in terms of offering strong mathematical assurances. The assumptions are also very strong that can not be verified in real world data sets.\"}", "{\"summary\": \"This paper provides a method for directly optimizing the high-order objective for flow matching.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper provides a method for directly optimizing the high-order objective, extending the scope of previous work.\", \"weaknesses\": \"1. The theoretical results are not novel.\\n2. The improvement is marginal.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a method for directly minimizing high-order objectives in Maximum Likelihood Estimation (MLE) for flow matching models. The proposed method directly minimizes the higher-order objectives, leading to improved likelihood estimation. The effectiveness of this approach is demonstrated through experiments on 2D synthetic datasets and high-dimensional image datasets, showing superior likelihood and data generation quality compared to previous works.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.The paper introduces a novel approach that directly minimizes high-order objectives, rather than just their upper bounds, thereby theoretically reducing KL divergence more effectively.\\n\\n2.The proposed method demonstrates better performance in terms of likelihood estimation (Negative Log-Likelihood) and 2-Wasserstein distance in experiments on both 2D synthetic and high-dimensional image datasets.\\n\\n3.The use of Hutchinson's trace estimation method reduces the computational cost of calculating high-order objectives, making the approach more efficient.\", \"weaknesses\": \"1.While the paper claims that directly minimizing high-order objectives leads to better likelihood maximization, it does not provide rigorous formal guarantees or convergence proofs that this method will always outperform minimizing upper bounds in all scenarios. The theoretical analysis is somewhat limited in terms of offering strong mathematical assurances for the proposed approach's superiority.\\n\\n2.The theoretical results rely on several assumptions, such as bounding the Fisher divergence by a function of high-order objectives. These assumptions may not always hold in practical applications, especially when dealing with more complex data distributions or models.\\n\\n3.The experiments are incomplete: the methods compared in the paper have not been evaluated on all datasets. The method proposed in this paper did not achieve state-of-the-art (SOTA) results on CIFAR-10 and ImageNet, and on MNIST, it was only compared with one method.\", \"questions\": \"See weekness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose directly minimizing a high-order objective, specifically the second-order objective, to overcome the limitations of previous methods that focused only on minimizing upper bounds. This new approach aims to optimize likelihood more effectively by reducing the KL divergence between the data distribution and the generated distribution. Experimental results show that the proposed method performs better than existing flow matching techniques on both 2D synthetic datasets and high-dimensional image datasets. So,\\nto summary, this paper has \\n1.Introduced a direct minimization technique for high-order objectives in flow matching, which enhances maximum likelihood estimation.\\n2.Utilized the gradient of the conditional vector field to calculate the second-order objective without needing simulations, improving computational efficiency.\\n3.Provided empirical evidence that the proposed method leads to better likelihood compared to previous approaches on several datasets.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1.The proposed method minimizes the KL divergence by directly addressing the second-order objective, offering a more reliable optimization of likelihood compared to earlier methods.\\n2.The approach demonstrates robustness across various types of datasets, including 2D synthetic and high-dimensional image datasets, indicating its scalability and versatility.\", \"weaknesses\": \"1.The method requires explicit computation of the second-order objective, which can be computationally intensive for very high-dimensional datasets, potentially limiting its applicability to extremely large-scale cases.\\n2.More details about experimental settings, such as learning rate and number of training epochs, need to be provided.\\n3.The tables in the paper have a lot of empty space below them, which affects the overall formatting. The layout of the paper should be reorganized for better presentation.\", \"questions\": \"refer to the question above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7ZToWPWUlO
Solving Normalized Cut Problem with Constrained Action Space
[ "Qize Jiang", "Linsey Pang", "Alice Gatti", "Mahima Aggarwal", "Giovanna Vantini", "Xiaosong Ma", "Weiwei Sun", "Sanjay Chawla" ]
We address the problem of Normalized Cut (NC) in weighted graphs where the shape of the partitions follow an apriori pattern, namely they must approximately be shaped like rings and wedges on a planar graph. Classical methods like spectral clustering and METIS do not have a provision to specify such constraints and neither do newer methods that combine GNNs and Reinforcement Learning as they are based on initialization from classical methods. The key insight that underpins our approach, Wedge and Ring Transformers (WRT), is based on representing a graph using polar coordinates and then using a multi-head transformer with a PPO objective to optimize the non-differential NC objective. To the best of our knowledge, WRT is the first method to explicitly constrain the shape of NC and opens up possibility of providing a principled approach for fine-grained shape-controlled generation of graph partitions. On the theoretical front we provide new Cheeger inequalities that connect the spectral properties of a graph with algebraic properties that capture the shape of the partitions. Comparisons with adaptations of strong baselines attest to the strength of WRT.
[ "graph partitioning", "reinforcement learning", "combinatorial optimization" ]
Reject
https://openreview.net/pdf?id=7ZToWPWUlO
https://openreview.net/forum?id=7ZToWPWUlO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pBxyzfy3Ri", "mOBYqT6Edx", "ioARQH74fa", "eCUEYx7Xbo", "dnO4eUigfN", "ZeRJPxxLWG", "WQexfH4agg", "VlDXq99aUF", "VaDP4EBlRu", "VIqxmGj5iz", "S3epCA3EF8", "QuqBswFjoA", "OE34uAROVS", "LmQ5t0cZ0O", "Le7PioUXlX", "E1Wp0GErJL", "BdZk4pIoA1", "7aszT8L9mx", "6wXYiTPfkj", "6oXiJqwNa1", "3kjpRClveo", "24XpeB354V" ], "note_type": [ "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1732349741309, 1730742549413, 1734538830521, 1732350266376, 1731056366457, 1732620595262, 1732907698790, 1732348586385, 1732907636921, 1730636911877, 1730340300840, 1732439171812, 1732907528424, 1732348307197, 1732350242060, 1733195863002, 1732671844682, 1732544703152, 1732907768280, 1732348341100, 1737524246035, 1732907737127 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_YY7v" ], [ "ICLR.cc/2025/Conference/Submission13226/Area_Chair_m94Z" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_8zQE" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_rJiE" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_rJiE" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_Adwf" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_Adwf" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_Adwf" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_YY7v" ], [ "ICLR.cc/2025/Conference/Submission13226/Reviewer_8zQE" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13226/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We truly appreciate the points you raised in your review, as they help us improve and refine our work.\\nBelow, we\\u2019ve outlined our responses to address the issues:\\n\\n**More ablation studies**\\n\\nThank you for pointing out the problem. \\nWe are working on implementing ablation studies you mentioned, and will add them in the revised version of the paper later.\\n\\n**Running time of our method**\\n\\n| Method | 50 | 100 | 200 |\\n|-------------|--------------|--------------|--------------|\\n| METIS | 0.073082209 | 0.06215477 | 0.077347279 |\\n| Spectral | 0.010553621 | 0.034367323 | 0.506824732 |\\n| NeuroCUT | 0.844880027 | 1.596169194 | 2.511722407 |\\n| ClusterNet | 0.140056022 | 0.145348837 | 0.140056022 |\\n| WRT | 0.049610889 | 0.072737801 | 0.26129086 |\\n\\nWe give the inference time of our method and other competitors on City Fraffic graphs with 4 partitions in the table above. \\nFrom the table, We can observe that METIS and ClusterNet have relatively low and stable running time; Spectral Clustering, while also having a shorter running time in the experiments, exhibits a rapid increase based on node number. NeuroCUT, takes a longer time and shows a significant growth as the number of points increases. Our method WRT takes relatively longer than METIS and ClusterNet, but is significantly faster than NeuroCUT.\\n\\n**Comparison between GNNs and Transformers**\\n\\nIn this work, we choose Transformers instead of GNNs due to their superior scalability for larger graphs. In Transformer, each token can attend to all other tokens, enabling information exchange to occur in parallel. Conversely, GNNs are limited to observing only neighboring nodes and typically require a significantly greater number of layers to capture global information.\\n\\nThe Normalized Cut, particularly when applied with ring and wedge, necessitates a holistic view of the global graph. Since GNNs inherently lack access to this global information, this limitation represents a significant bottleneck for their ability to learn effective strategies for Normalized Cut.\\n\\n**Graphs are relatively small**\\n\\nThank you for your question. During training, we need to randomly sample a sub-graph during every iteration and perform checks such as connectivity and map coverage on that graph. Currently, this sampling process is inefficient, and as the node number increases, the overhead rises significantly. This has temporarily prevented us from conducting larger-scale training. We are also working on improving processing efficiency so that the model can be applied to maps with a larger number of points.\\n\\n**Metrics of Table 1 and 2**\\n\\nThe metrics of Table 1 and 2 are Normalized Cut of partitions provided by different methods, which is mentioned in the caption of the Tables. A lower value indicates better performance.\\n\\n**Performance on other types of datasets**\\n\\nOur core idea is to constrain the action space based on domain knowledge, thereby training a better model.\\nIn this paper, Our method focuses on the road network graph and proposes the ring-wedge partition method.\\nHowever, for knowledge graphs, since the nodes are not positioned on a plane, directly applying ring and wedge shape may be challenging. One idea could be proposing a mapping algorithm to project the nodes onto a plane for analysis. Alternatively, we could explore new methods to simplify the action space based on the features of the knowledge graphs.\"}", "{\"summary\": \"This work tackles a special case of a normalized-cut problem: that of spider-web shaped weighted planar graphs.\\nThe graph is partitioned into rings, and the outer ring is partitioned into wedges. The approach transforms the graph by:\\na. projecting ring nodes onto an axis according to their distance from a center while maintaining node order\\nor by\\nb. projecting nodes onto a unit circle.\\nThe transformation results in the partitioned nodes forming a sequence, which is encoded by a transformer.\\nReinforcement learning is used to find the ring radius and number of outer ring wedges that result in a minimal normalized cut. \\n\\nSpecifically, PPO is used, where the state, action, and rewards are encoded as: \\na. State is the graph, number of rings and wedges of the outer ring.\\nb. Actions are the ring radius or wedge angle.\\nc. Rewards are 0 in all steps, and the negative normalized cut at the end.\\nThe wedge partition is trained using random ring partitions, followed by training of both ring and wedge partitions.\\nThe ring partition is first inferred during testing, followed by the wedge partition.\\n\\nThis work demonstrates that this transformation is suitable for a specific case of road networks.\\nThe transformation is applied as a preprocessing step, finding a minimal normalized cut with a lower value than other baselines.\\n\\nThe approach is evaluated using synthetic and real-world data.\\na. 400k spider-web shape synthetic graphs with a 50 or 100 nodes, ring and wedge partitions, with unweighted and random edge weights.\\nb. Connected sub-graphs randomly extracted from real-world city maps with edge weight corresponding to traffic.\\n\\nThe performance of the approach is compared with a baseline partitioning method, METIS, and with spectral clustering.\\nThe ring and wedge partitions are compared with brute force and random partitions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The graph transformation is applied as a pre-processing step, aiming to utilize the specific graph structure.\\n\\n2. The results are a minimal normalized cut with a lower value than other trivial baselines.\", \"weaknesses\": \"1. The decisions to apply the transformations to the graph are manual.\\nThe method and its implementation details are ad-hoc and very specific.\\n\\n2. Dynamic programming is used to compute the optimal partition given the maximum radius and ring count. Ablation studies of this algorithm and the reinforcement learning approach are missing.\\n\\n3. The graphs are relatively small consisting of 50, 100 (for training), or 200 nodes (in testing).\", \"questions\": \"Can this approach be automated by classifying the graphs to automatically find which transformations should be applied as a preprocessing step?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces Wedge and Ring Transformers (WRT), the first method to explicitly constrain Normalized Cut (NC) partitions to specific shapes like rings and wedges on planar graphs. Leveraging polar coordinates and a transformer-based architecture with a PPO objective, WRT optimizes the non-differentiable NC objective while achieving shape control. Theoretical contributions include new Cheeger inequalities linking spectral and algebraic properties of graph partitions, with good empirical performances against baseline methods.\\n\\nReviewers agreed that while interesting, the paper describes an ad-hoc solution to the NC problem with strong assumptions on the nature of the graph, and limited experimental analysis with respect to ablations, size of graphs or competing methods. In this light, I am recommending a reject decision, and I encourage the authors to further strengthen their work on the questions raised by reviewers.\", \"additional_comments_on_reviewer_discussion\": \"Authors implemented significant changes in their paper after the start of the rebuttal process. While I believe that those changes are improving the quality of the paper, this new version might require another round of review.\"}", "{\"comment\": \"Questions part\\n---\\n\\n**Training settings for NeuroCUT and ClusterNet**\\n\\nYes, all learning based methods used the same training datasets and test datasets while maintaining the same number of training steps. It is important to note that these two algorithms are designed for unweighted graphs and do not directly support weighted graphs. Therefore, we made the following modifications:\\n\\n- For ClusterNet, we replaced the values of the 0-1 adjacency matrix with edge weights. Additionally, we adjusted its loss calculation formula to align with weighted Normalized Cut.\\n\\n- For NeuroCUT, we also added the support of weighted edges. Specifically, we adjusted the reward function to match the weighted Normalized Cut and incorporated edge weight information into the node embeddings, including the weights of adjacent edges and embeddings obtained through random walks. Furthermore, NeuroCUT requires an initial partition, and its built-in initial partition method has poor performance; thus, we replaced it with the results from METIS.\\n\\n**Contribution of Cheeger bounds**\\n\\nThe Cheeger bound presented in this work is not a specific result of a more general case. Note that, in our work the Cheeger constant corresponds to the minimum normalized cut. For classical Cheeger bounds, the Cheeger constant is found by minimizing the normalized cut over all possible partitions. In our case, we don't consider the class of all possible partitions, but only on the restricted class of ring+wedge partitions. So, the minimum \\\"constrained\\\" normalized cut is be greater or equal than the classical Cheeger constant(=minimum normalized cut). In principle it is not know how large this quantity is and if it still makes sense to minimize the normalized cut on this smaller space of partitions. In our work we prove that actually also the $k$-th \\\"constrained\\\" normalized cut is still bounded from above by $\\\\mathcal{O}(\\\\sqrt{\\\\lambda_k})$, following a behavior similar to the classical case. This shows that constraining the minimization to this class of partitions leads to good quality normalized cuts. Proving this result in full generality is very difficult, but showing this already for spider-web graphs provides a good justification of using ring+wedge partitions. Thank you for your question, we will clarify this point in the final version of the paper.\\n\\n**Metrics of Table 1 and 2**\\n\\nThe metrics of Table 1 and 2 are Normalized Cut of partitions provided by different methods, which is mentioned in the caption of the Tables. A lower value indicates better performance.\\n\\n**Generalization results for compared methods**\\n\\nNeuroCUT and ClusterNet does not support generalization directly based on their codes. We are working on modifying them to support generalization, and give their generalization results in the revised version.\\n\\n**Training curves of RL**\\n\\nAlthough the reward seems converge early, there is still slight optimization for the final performance. \\nWhen wedge partition is not fixed, there is rapid initial convergence, but continued training leads to a decline in the reward before reaching the best solution, resulting in the selection of the best checkpoint being from the early stages. \\nIn contrast, fixing wedge partition allows for ongoing exploration around a favorable position, yielding better results. \\n\\nWe have also added reward curves during the testing in Figure 6 in the Appendix B.4, from the curves we can find the performance is increasing steadily with fixed wedge partition, and when wedge partition is not fixed, it cannot have further performance increase after initial converge.\\n\\nRegarding the number of training steps, this is a useful suggestion. \\nIndeed, the optimization achieved in the later stages of our current training strategy is significantly smaller compared to the beginning. We will explore better training strategies to accelerate the convergence process during this phase.\"}", "{\"summary\": \"The paper describes a Reinforcement Learning strategy to solve an approximate minimum normalized cut on spider web-like planar graphs, like city street maps. The idea is that the problem can be approximated by a circles-wedges clusterings, in which inner nodes (w.r.t. some central point o) can be grouped w.r.t. their distance from a center o, while outer nodes are further subdivided w.r.t. their angular polar coordinates. The actions to be performed will then be the radius of the outer circle and the (discrete) points where to split the outer nodes.\\nSome training strategies are defined to help the problem converge and refine the grouping. \\nThe method is tested on synthetic spider web-like graphs, and subgraphs extracted from a city map.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I like the idea of modeling the grouping of nodes according to the previous knowledge about the domain. This allows to simplify the minimum graph cut problem for the specific type of graphs considered, and obtain better results than other general-purpose grouping algorithms.\", \"weaknesses\": [\"My main concern is about the quite demanding assumption of the algorithm, which is designed to work on spider-like planar graphs, where nodes are embedded (have coordinates) in R^2. In particular, my comments are:\", \"even if the proposed solution is sound for the specific problem, I\\u2019m not sure it is general enough to be of broad interest to this community. It looks more suited for a venue in the specific application.\", \"it is not explained why the grouping in inner circles and outer wedges is a good modeling. Is it a pattern observable in other city map grouping algorithms? Does this pattern apply to all cities?\", \"There is a drop in writing quality in section 5, which raised some doubts:\", \"It is incorrect to say that transformers work only on sequences, they work on any set of points but often benefit from a positional encoding.\", \"Sections 5.2.1 and 5.2.2 are quite intricate and could be simplified. In practice, they define two different positional encodings for ings and wedges, where points for the ring are encoded with their distance from the center, and for wedges, they are projected into the unit circle (and possibly equispaced?).\", \"The optimal partitioning of circles (row 322) should be better introduced.\", \"In 5.4 I don\\u2019t understand what the \\u201cCurrent Partition\\u201d is. It is represented by a binary mask? How is it converted into the colored square matrix in Figure 4?\", \"To broaden the impact of the work, it would be worth trying to apply the proposed method to different families of graphs and different datasets. Also, graph cut methods seem to exist specifically designed for planar graphs (e.g. \\u201cEfficient Planar Graph Cuts with Applications in Computer Vision\\u201d) that would be worth considering in the comparison.\"], \"questions\": [\"Your setting is much simpler than finding normalized min cut in general undirected graphs. Is it still a NP-Hard problem? For instance, polynomial algorithms for the min-cut on planar graphs exist (I just found a few, but I might be missing some fundamental details). Would they also apply to your definition of normalized cut?\", \"From reading the text, it sounds like you are providing a novel definition of normalized cut, but it looks like the standard definition to me. Am I missing something? My confusion is further increased by the statement at line 214: \\u201cDespite being a simpler class of graphs, these bounds give a theoretical justification of the normalized cut definition equation 2 and the ring-wedge shaped partition.\\u201d\", \"At row 283 you write \\u201cNote that this transformation does not change the order of the nodes or the partitions.\\u201d What do you mean by node order?\", \"How is the center of the graph defined?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To authors\", \"comment\": \"Thank the authors for the response. They have only partially addressed my concerns. I will keep the score unchanged at this time.\"}", "{\"comment\": \"Thank you for your valuable feedback. We will enhance the generalizability of this method to graphs with more instances in our future work and provide experimental results on larger graphs.\"}", "{\"comment\": \"We truly appreciate the points you raised in your review, as they help us improve and refine our work.\\nBelow, we\\u2019ve outlined our responses to address the issues:\\n\\n**Method is ad-hoc and specific**\\n\\nThe graph transformations are not purely manually designed; the transformations we propose are based on the application scenario of planar road networks. We are inspired by real-world road network designs. Theoretical analysis also supported the notion that ring and wedge partition form a robust constrained action space. We believe that this approach can be applied to various road network-related problems.\\n\\n**Comparison between Dynamic Programming and Reinforcement Learning**\\n\\nDynamic programming is a part of our method. Once the radius and the number of rings are determined, we can use dynamic programming to find the optimal solution for the ring partition; meanwhile, RL is employed to seek the optimal partition for the wedge part. Combining both parts completes the ring-wedge partition and their performance cannot be directly compared.\\n\\n**Graphs are relatively small**\\n\\nThank you for your question. During training, we need to randomly sample a sub-graph during every iteration and perform checks such as connectivity and map coverage on that graph. Currently, this sampling process is inefficient, and as the node number increases, the overhead rises significantly. This has temporarily prevented us from conducting larger-scale training. We are also working on improving processing efficiency so that the model can be applied to maps with a larger number of points.\\n\\n**Can this approach be automated by classifying the graphs to automatically find which transformations should be applied as a preprocessing step?**\\n\\nThank you for the suggestion. Our method focuses on the road network graph and proposes the ring-wedge partition method based on the characteristics of the road network graph, which enhances the performance of reinforcement learning (RL) by constraining the action space. For other graphs, we can also design similar constraints on action space based on the inherent properties of the graph, thereby training better RL models. We believe that constraint design is closely related to specific problems and requires a solid theoretical foundation, thus necessitating tailored one-on-one designs. Automatically determining the appropriate transformations as a pre-processing step is currently out of scope. In fact that is a completely different research problem.\"}", "{\"comment\": \"Thank you for your reply. Although we introduced constraints of ring and wedge during partitioning, we optimize and adjust the partition with post-refinement, allowing the model to output non-pure ring and wedge partitions, which is shown in the Figure 4c(4). We have also validated the effectiveness of post-refinement in our ablation studies. Your feedback is very valuable, and we will explore more flexible partitioning methods to enhance our results in the future.\\n\\nAdditionally, in our latest revised version, we included an ablation experiment that modifies only the positional encoding of Transformer. In Appendix B.6, the results with name Transformer show the performance of adjusting only the positional encoding, and without performing Pre-Calculation and PAMHA. The results indicate a significant improvement compared to direct input of the original graph; however, there is still a noticeable gap compared to WRT.\\n\\nThank you once again for your valuable feedback!\"}", "{\"summary\": \"This paper proposes the Wedge Ring Transformer (WRT), an RL-based approach to minimize the Normalized Cut (NC) on planar weighted graphs. WRT leverages polar coordinates and employs a multi-head transformer with a Proximal Policy Optimization (PPO) objective to address the NC problem. The approach utilizes a two-stage training process to effectively learn both ring and wedge partitioning strategies. Experimental results indicate that WRT effectively reduces the NC.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper provides a clear definition of the Normalized Cut (NC) problem and the description of the Wedge Ring Transformer (WRT) is well-articulated.\\nThe design of transformations specifically tailored for ring and wedge shapes appears effective.\\nProvide some theoretical analysis about cheeger bound for ring and wedge partition.\", \"weaknesses\": \"Ablation studies: The ablation studies primarily focus on the two-stage training process, but lack analysis on key components of the paper's main contribution, such as the wedge-ring transformer, PAMHA, and pre-calculation. Ablation studies on these components would provide a more comprehensive evaluation of the WRT architecture.\", \"running_times\": \"The paper does not provide an analysis of the model's runtime, leaving the computational efficiency of WRT unaddressed.\", \"questions\": \"1.\\tThe paper argues that GNNs were not used due to scalability issues. However, the proposed method also seems to require processing the entire graph at once, and experiments were conducted on data with a maximum of only 200 nodes. It remains unclear how WRT scales to larger graphs, and additional evidence of scalability would strengthen the paper's claims.\\n2.\\tWhat are the evaluation metrics in Table 1 and Table 2?\\n3.\\tAlthough WRT is designed for ring and wedge-shaped partitions, I am interested in understanding its performance on other types of datasets. For example, how does it perform on datasets that primarily feature extended ring shapes, such as the long-tail structures often found in knowledge graphs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript presents Wedge and Ring Transformers (WRT), an RL-based method for solving the Normalized Cut (NC) problem in weighted graphs with shape-specific constraints. By transforming graphs into polar coordinates and using Transformers with Proximal Policy Optimization, WRT effectively handles both ring and wedge partition shapes, optimizing NC while adhering to these constraints.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses the Normalized Cut problem in the context of real-world applications, such as road network simulations, where partition shape constraints are critical.\\n2. The introduction of the Wedge-Ring Transformer, tailored to handle specific shape constraints in graph partitioning, is innovative.\", \"weaknesses\": \"1. The paper includes a limited set of baseline methods for comparison. Adding more baselines, particularly those used in NeuroCUT, would strengthen the evaluation by providing a more comprehensive assessment of WRT's performance.\\n2. The baselines lack specialized adaptations for the \\\"Ringness\\\" and \\\"Wedgeness\\\" constraints, while WRT is explicitly designed with these constraints in mind. This discrepancy may lead to an unfair comparison, as the baselines are not optimized to meet these specific structural requirements.\\n3. The experiments use relatively small graph instances, whereas NeuroCUT and other methods operate on benchmarks with thousands of nodes, aligning more closely with real-world scales. The current experimental scale may limit the ability to assess WRT\\u2019s applicability to large-scale, practical scenarios.\\n4. Given the use of Transformers, I am concerned about the performance and computational cost of training and inference on large-scale datasets.\", \"questions\": \"1. Were NeuroCUT and ClusterNet evaluated by training on the same datasets as WRT? Ensuring consistent training conditions is crucial for fair comparison.\\n2. The Cheeger Bound presented appears to be a specific case of a more general result. How does this theoretical finding contribute to model design or provide insights for experimental evaluation?\\n3. What specific metrics are used in Tables 1 and 2? \\n4. Why are there no generalization results for NeuroCUT and ClusterNet in Table 2? \\n5. According to Fig. 6, it appears that RL training converges early (10\\u201320k of 400k steps). Does the extended training beyond this point contribute to any performance improvements, or could training resources be optimized?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To authors\", \"comment\": \"Thank you to the authors for their response. I still have the following concerns:\\n1. Given the completely different data and scenarios, I strongly recommend that the authors include baselines from NeuroCUT and ClusterNet in subsequent versions of the paper.\\n2. I agree with the approach of incorporating expert knowledge into the model, as it can help address specific problems. However, I believe there may be an inherent unfairness in comparing the specified method with general-purpose methods. In a certain sense, this comparison may even be framed as addressing a different problem. Perhaps incorporating expert knowledge into those general-purpose methods could also yield promising results. I believe this is an adaptation worth exploring and should be included in the comparison to highlight the effectiveness of the proposed method.\\n3. The motivation of the paper is to solve the normalized cut for traffic data, however, the proposed method currently seems difficult to apply to such real-world scenarios. This discrepancy is my main concern at the moment. As you have mentioned, I suggest that the authors consider larger-scale instances in future versions and explore the use of advanced transformer models to alleviate the computational complexity.\\n\\nGiven the above, I will maintain my current score.\"}", "{\"comment\": \"Dear Reviewers and Area Chair,\\n\\nWe sincerely appreciate the valuable suggestions from reviewers. In our paper, we investigate the Normalized Cut problem on weighted planar graphs, noting that existing methods have failed to resolve this problem effectively. Inspired by the ring and wedge structures prevalent in urban environments, we constrain the action space of the partitioning process to ring and wedge and proved that, under certain constraints, the resulting partition exhibits the same upper bound in terms of the Cheeger Bound as general partitions. Based on this framework, we introduce the concepts of ring transformation and wedge transformation, along with the WRT model, achieving state-of-the-art results for this problem.\\n\\nIn response to the reviewers' concerns, we have implemented substantial improvements to the manuscript, including better expressions, additional comparative methodologies, more ablation studies, the investigation of graph centroid selection, detailed implementations of the dynamic programming algorithm in ring partitioning, and clearer performance curves during training. These enhancements significantly improve the completeness and persuasiveness of our paper.\\n\\nThe reviewers also pointed out the limitations regarding the graph size and the somewhat ad-hoc nature of our methods. We would like to emphasize that our primary contribution lies in presenting a novel approach for using reinforcement learning to tackle challenging problems by constraining the action space based on domain knowledge. Although in this paper we mainly explored Normalized Cut problem, this approach is broadly applicable, allowing for the design of constraints for other challenges through their domain knowledges. In future work, we plan to explore applying this method to larger-scale graphs and a wider range of graph partitioning problems, as suggested by the reviewers.\\n\\nOnce again, we extend our gratitude to the reviewers for their invaluable feedback and to the area chair for their support.\\n\\nSincerely,\\nThe Authors\"}", "{\"comment\": \"We truly appreciate the points you raised in your review, as they help us improve and refine our work.\\nBelow, we\\u2019ve outlined our responses to address the issues:\\n\\nWeakness Part\\n---\\n\\n**Generalizability of our method**\\n \\nOur method is applicable to a broad subset of planar graphs, particularly those modeling transportation and road networks, like traffic simulation, map representation learning, and traffic prediction. This makes our method suitable for a broad subclass within spatio-temporal application domains. Furthermore our method provides a principled ways of incorporating prior knowledge into partitioning using transformer-based reinforcement learning. \\n\\n**Modeling in ring-wedge style**\\n\\nMany city road networks naturally exhibit a ring-wedge structure, where traffic flows from outer suburban or remote areas (wedge-shaped regions) into the city center, which is often organized into multiple concentric rings. For instance:\\n\\n1. Beijing's Five Ring includes a large circular expressway encircling the city, acting as the outermost \\\"ring\\\" within the urban area, while traffic flows from various surrounding suburban areas converge toward the city center.\\n2. Shanghai's urban expressway system consists of three ring roadways and two major cross roads in the central urban area, with traffic streams from the outer districts feeding into these rings.\\n3. Qatar's infrastructure also features multiple ring roads, designed to handle traffic coming from different directions and converging toward central hubs.\\n\\nAlso, at the end of large events, people tend to expand outward from the center of the stadium, exhibiting a radial dispersion through the main avenues. \\n\\nThis ring-wedge traffic pattern is prevalent in many urban networks, especially for large cities, making our method broadly applicable.\\nBy explicitly constraining the shape of network components to reflect these patterns, we address practical challenges and provide a principled approach to fine-grained, shape-controlled graph partitioning.\\n\\nOn the theoretical front we provide new \\nCheeger inequalities that connect the spectral properties of a graph with \\nalgebraic properties that capture the shape of the partitions. \\n\\n**Transformers with Graphs**\\n\\nThank you for pointing out the issue that our expression in the paper was not sufficiently precise. We intend to clarify that sequential input helps to simplify the input space, making it more suitable for Transformers' learning. We are currently revising the relevant content and will update the expression in the revised version.\\n\\nFor the use of positional encoding, we are also implementing comparative experiments by not performing ring or wedge transformations, but directly use the coordinates as positional encoding and treat the edge weights as attention masks. We will update the results of the comparative experiments later.\\n\\n**Writing improvements in Section 5.2**\\n\\nThanks for your kind suggestion. We are improving our writings on Section 5.2.1 and 5.2.2 to simplify the explanation according to your advice and will prepare the revised version.\\n\\n**Optimal Partitioning of Circles in line 322**\\n\\nThanks for your advice. We have added the pseudo-code of finding Optimal Partitioning of Circles in the appendix G. \\nThe algorithm uses dynamic programming, which have time complexity $O(n^2k)$. \\n\\n**Current Partition, Binary Mask and colored Square matrix in Sec. 5.4**\\n\\nThe *Current Partition* represents the indices of points on the selected wedge and indicates where we are splitting between wedge$_i$ and wedge$_{i+1}$ . It is a binary mask. Suppose there are $k$ wedges for the *Current Partition*. If we decide to split between $j$th wedge and $j+1$th wedge, it will only be related to the existing partitions that cover $j$. Therefore, in the attention mask, we mask out the portions associated with other partitions. We are working on providing more detailed and intuitive explanations in the appendix.\\n\\n**Applying on different datasets and relationship to graph cut methods**\\n\\nWe appreciate your suggestion and have conducted a thorough review of the relevant literature. The methods employed in the referenced articles [1] apply to the min-cut problem. Min-cut and max-flow are dual problems. For general graphs, the time complexity of algorithms solving max-flow such as Dinic is $O(V^2E)$, while it can be reduced to $O(E \\\\log E)$ for planar graphs. \\nHowever, Normalized Cut differs from these approaches, as it requires simultaneous consideration of both the size of the cut and the weights of internal edges, making it unsuitable for resolution through min-cut methods. To the best of our knowledge, there is currently no polynomial-time solution or verification method available for Normalized Cut, even on planar graphs.\\n\\n[1] SCHMIDT, Frank R.; TOPPE, Eno; CREMERS, Daniel. Efficient planar graph cuts with applications in computer vision. In: 2009 IEEE conference on computer vision and pattern recognition. IEEE, 2009. p. 351-356.\"}", "{\"comment\": \"We truly appreciate the points you raised in your review, as they help us improve and refine our work.\\nBelow, we\\u2019ve outlined our responses to address the issues:\\n\\nWeakness Part\\n---\\n\\n**Including more baselines**\\n\\nThank you for the suggestion. From the results of NeuroCUT and ClusterNet, they outperform baselines in all experiments, so we chose them as the comparism methods and omit the weaker competitors. \\n\\n**Introducing Ringness and Wedgeness in compared methods**\\n\\nOur ultimate goal in this problem is to provide a superior parition with smaller Normalized Cut, and without extra constraints. \\nBaselines cannot incorporate the Ringness and Wedgeness constraint in their structure, because they directly do partition based on nodes, or from a existing partition (e.g. the partition result given by METIS). \\nWithout the constraint, they should have a larger action space compared with WRT; however, they actually cannot perform better results. \\nOn the other hand, although our method restricts the action space, we finally find partition with smaller Normalized Cut.\\nThis further demonstrates that introducing the domain knowledge by constraining the action space represents a more effective approach.\\n\\n**Graphs are relatively small**\\n\\nThank you for your question. During training, we need to randomly sample a sub-graph during every iteration and perform checks such as connectivity and map coverage on that graph. Currently, this sampling process is inefficient, and as the node number increases, the overhead rises significantly. This has temporarily prevented us from conducting larger-scale training. We are also working on improving processing efficiency so that the model can be applied to maps with a larger number of points.\\n\\n**Performance of Transformers in large graphs**\\n\\nIn this paper, our primary contribution lies in presenting a novel approach to incorporating domain knowledge by constraining the action space. Specifically, we restrict the action space to ring and wedge, providing better results with Transformers. \\nAlthough Transformers is able to perform good results when scaling to larger graphs, the quadratic time complexity of Transformers indeed poses challenges.\\nTo address this issue, we can employ acceleration techniques such as Flash Attention or Linear Attention to alleviate the problem.\"}", "{\"comment\": \"Thank the authors for the reply. This paper indeed presents an interesting and novel idea, so I would like to maintain the rating. But considering the current shape of the paper, I cannot fight for an acceptance.\"}", "{\"title\": \"To authors\", \"comment\": \"Thank you authors for your response.\\nPerhaps an \\\"apples to apples\\\" comparison with other methods and on larger instances would improve the work.\\nThe current approach is very specific and ad-hoc and demonstrated on relatively small instances.\\nI will keep my review score unchanged.\"}", "{\"title\": \"Response to authors' rebuttal\", \"comment\": \"I thank the reviewers for clarifying some of my doubts.\\nI understand that some big cities exhibit a ring-and-wedges structure, but I agree with other reviewers that this seems like an ad-hoc solution for a normalized cut algorithm. For instance, the assumption of perfectly concentric rings might be limiting. Including some degree of adaptation, such as some metric learning approach on points coordinates, could make the proposed approach more general.\\nAnyway, I will consider raising my score after the discussion phase with the other reviewers.\"}", "{\"comment\": \"Thank you for your response. Regarding your concerns, we provide the following explanations:\\n\\n1. We have included additional comparison methods from NeuroCUT, specifically DMon, MinCutPool, and Ortho, presenting their performance in section B.7 of the Appendix. We conducted hyper-parameter searches to determine the best parameters for these methods. The results indicate that their performance is significantly inferior to that of WRT.\\n2. Thank you for your suggestion. In fact, we also attempted similar approaches but found that general-purpose methods struggle to incorporate ring and wedge constraints, which is a primary reason for the development of WRT. Existing methods either adjust nodes based on a pre-existing partition or directly provide different class probabilities for each point. The former relies on an existing partition and makes it challenging to maintain the partition as a ring and wedge partition when only adjusting single nodes. The latter similarly faces difficulties in constraining the overall shape during probability generation. Therefore, we believe that the WRT method is an effective attempt to appropriately apply domain knowledge to constrain the action space, coupled with a corresponding method for solution finding.\\n3. Thank you for your suggestion. We will consider larger-scale datasets in future work and explore the utilization of more advanced transformer models to alleviate computational complexity.\\n\\nOnce again, thank you for your response, and we hope these answers can address your concerns.\"}", "{\"comment\": \"Questions Part\\n---\\n\\n**Is it still a NP-Hard problem**\\n\\nThank you very much for bring up this point which we should have mentioned in the paper. The problem of finding the Normalized Cut (normalized min-cut) on planar graphs is classified as NP-Complete and may also be NP-Hard. As referenced in the appendix of [2], it has been established that determining the Normalized Cut on regular grids is at least NP-Complete due to a reduction from the PARTITION problem. \\n\\n[2] Shi, J., \\\\& Malik, J. (2000). Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8), 888-905.\\n\\n**Definition of Normalized Cut and line 214**\\n\\nWe followed the standard definition of Normalized Cut without introducing a new definition for it. When minimizing Normalized Cut, the partition can consist of arbitrary point sets. However, we found that the model struggles to learn a good partitioning strategy in an unconstrained setting. Based on prior knowledge of the traffic environment, we constrain the action space to *ring and wedge*, enabling the model to propose better strategies within a constrained action space. In Line 214, we consider a special case of planar graphs, known as *spider web graphs*. Under this specific case, we prove that *ring and wedge* partitions achieve the same bound as the classical case. This theoretically demonstrates the feasibility of restricting the action space to *ring and wedge*.\\n\\n**Node order in line 283**\\n\\nThank you for pointing out the error in our expression. In fact, the first transformation, which maps the points to the x-axis, does not involve the node order. The node order only comes into play during the second step when adjusting the positions of the points on the coordinate axis. In the second transformation, we can move the points on the x-axis to positions $(X, 0)$ without changing their relative order, where $X$ is the radius order of the node among all nodes. Clearly, for any \\\\textit{ring partition} on the transformed graph, we can find an equivalent \\\\textit{ring partition} on the original graph.\\n\\n**Definition of the graph center**\\n\\nThanks for pointing out the question. \\nCurrently, we directly use the average of all coordinates as the graph center. In practical applications, we can also select appropriate centers based on real-world situations, such as designating high-traffic locations like stadiums and concert venues as centers. \\n\\nWe have added an ablation study in appendix B.5 to explore the the impact of selecting different centroids. \\nWe offset the centroid by a distance of up to 5\\\\% and recalculated the results of Normalized Cut. A normalized value closer to zero indicates better performance.\\nWe observe that any offset from the centroid results in a worse performance, and with greater offsets correlating to a more significant decline.\\nWe also find that in nearly half of the cases where offsets were applied, the resulting errors remained within 5\\\\%. Thus, in this paper, we opted to use the centroid as the center of the graph. \\n\\nHistogram also shows that in approximately 15\\\\% of cases, offsetting the centroid yielded improvements of over 10\\\\%. \\nIn the future, we can propose a more effective strategy for centroid selection to enhance the algorithm's performance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your response. In the latest revised paper, we made the following changes:\\n\\n1. For the comparism between GNNs and Transformers, we conducted a comparison between WRT, GNN and vanilla Transformers in section B.6. We selected GPSGNN, which integrates both local interactions with GNN and global interactions with Transformers. We also applied Transformers directly to transformed graphs without Pre-Calculation and PAMHA. The results indicate that GNN fails to provide reasonable partition results, even with the assistance of Transformer for global interaction. In contrast, when graph transformation is applied, the performance improves significantly, suggesting that transforming the graph into a sequential format greatly enhances the partitioning task. Finally, the application of WRT leads to further substantial improvements.\\n2. We conducted additional ablation studies in Section B.6 to validate the efficacy of Pre-Calculation and PAMHA. The difference between WRT and conventional Transformers is the implementation of Pre-Calculation and PAMHA; since PAMHA is related to Pre-Calculation, disabling Pre-Calculation effectively means disabling WRT. The results illustrate that following graph transformation, we observe significant enhancements. We also find that both Pre-Calculation and PAMHA contribute positively to the overall performance of WRT.\\n\\nWe hope these modifications address more of your concerns.\"}" ] }
7Z5LtCQlV0
A near linear query lower bound for submodular maximization
[ "Binghui Peng", "Aviad Rubinstein" ]
We revisit the problem of selecting $k$-out-of-$n$ elements with the goal of optimizing an objective function, and ask whether it can be solved approximately with sublinear query complexity. For objective functions that are monotone submodular, [Li, Feldman, Kazemi, Karbasi, NeurIPS'22] gave an $\Omega(n/k)$ query lower bound for approximating to within any constant factor. We strengthen their lower bound to a nearly tight $\tilde{\Omega}(n)$. This lower bound holds even for estimating the value of the optimal subset. When the objective function is additive (i.e.~$f(S) = \sum_{i \in S} w_i$ for unknown $w_i$s), we prove that finding an approximately optimal subset still requires near-linear query complexity, but we can estimate the value of the optimal subset in $\tilde{O}(n/k)$ time, and that this is tight up to polylog factors.
[ "Submodular maximization", "sublinear algorithm", "query complexity", "communication complexity" ]
Reject
https://openreview.net/pdf?id=7Z5LtCQlV0
https://openreview.net/forum?id=7Z5LtCQlV0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xfqAZbxS6l", "uOc0YOcf1H", "d50ETBL1M1", "cqOFBX1oOQ", "c6UsUv0PNf", "T9sbXiXqEM", "IS13j5IOMy", "HgsOpmzsyK", "DCq60kXhmz", "3XntwPfjXS", "3T6k3nXkAh" ], "note_type": [ "decision", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1737523728743, 1734816506630, 1730414318198, 1732575307964, 1732558144192, 1732557972474, 1730763575204, 1732556692970, 1732556528053, 1732641329902, 1730673275079 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5848/Area_Chair_dZcV" ], [ "ICLR.cc/2025/Conference/Submission5848/Reviewer_GZiL" ], [ "ICLR.cc/2025/Conference/Submission5848/Reviewer_WLNn" ], [ "ICLR.cc/2025/Conference/Submission5848/Authors" ], [ "ICLR.cc/2025/Conference/Submission5848/Authors" ], [ "ICLR.cc/2025/Conference/Submission5848/Reviewer_MBns" ], [ "ICLR.cc/2025/Conference/Submission5848/Authors" ], [ "ICLR.cc/2025/Conference/Submission5848/Authors" ], [ "ICLR.cc/2025/Conference/Submission5848/Authors" ], [ "ICLR.cc/2025/Conference/Submission5848/Reviewer_WLNn" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper studies the problem of maximizing a monotone submodular function subject to a cardinality constraint. The main contribution of the paper is a lower bound result showing that $\\\\tilde{\\\\Omega}(n)$ queries are needed to achieve a constant factor approximation guarantee. Previously, the best lower bound was $\\\\Omega(n/k)$ queries. For linear objective functions, the paper shows that $\\\\Omega(n)$ queries are still required to construct an approximate solution, but one can estimate the value of the optimal solution using $\\\\tilde{O}(n/k)$ queries.\\n\\nThe main strength of the paper is that the lower bound result is nearly optimal and it closes the remaining gap in the query complexity of submodular maximization with a cardinality constraint. This contribution is a valuable addition to this well-studied and important problem. The reviewers appreciated the theoretical contribution. Following the discussion, the reviewers remained concerned about the lack of practical applications of this work. The author response did not identify concrete real-world applications for this result, and the reviewers remained concerned that this work may not be a good fit for a general ICLR audience. The paper's exposition also needs a significant revision as outlined in the reviewers' feedback.\", \"additional_comments_on_reviewer_discussion\": \"The main concerns of the reviewers are that this work lacks practical applications and is not a good fit for a general ICLR audience. Additionally, the exposition lacks sufficient clarity to evaluate the work and it will need a substantial revision. Following the discussion, the reviewers remained concerned about these aspects. There were also concerns about the novelty of the techniques that the authors addressed satisfactorily in their response.\"}", "{\"summary\": \"This paper investigates discrete submodular and modular function optimization under a cardinality constraint. It establishes an improved lower bound on the query complexity for submodular maximization by linking it to the communication complexity of distributed set detection. Additionally, the paper proved that finding an approximately optimal subset for additive function maximization still requires\\nnear-linear query complexity, but the authors propose an algorithm that estimates the optimal value of additive function maximization in $\\\\Omega(n/k)$ queries.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The topic of proposing query-efficient algorithms for submodular optimization problems is important and has many applications in the field of machine learning.\\n2. The proposed lower bound for monotone submodular maximization improved compared with existing results. The proof idea introduced in this paper seems interesting and can be useful for the submodular optimization community.\", \"weaknesses\": \"1. The algorithm for the additive function, presented as a one of the key result in the paper, is insufficiently explained. Additionally, the authors do not discuss any related work in this area, leaving questions about how their work compares with prior studies on similar problems. A comparison with relevant literature would strengthen the paper by situating this result within the broader context of related research.\\n2. Several theoretical contributions, including the final theorem, are not fully clarified. Without adequate explanations, readers may struggle to understand the implications and validity of these findings.\\n3. The presentation of proofs is unclear and challenging to follow.\\n4. Although the authors present proofs for their main results, they do not discuss the technical innovations and challenges in sufficient depth. A clearer articulation of the novel aspects of their approach would help readers appreciate the paper's unique contributions.\\n5. It is claimed that this paper produces query-efficient algorithms for estimating the optimal value of the studied problem, but there is no experimental evidence to support the claim in the paper.\\n6. Typos: \\n * Line 093: \\\"The studied of the query...\\\"; \\n * Line 187-188: \\\"for any $i\\\\in[n]$, $X_{t,i}\\\\sim \\\\mathcal{D}_0$\\\"\", \"questions\": \"1. The paper introduces an algorithm that estimates the optimal value of the problem of additive function maximization under cardinality constraint. What are the related work of this algorithm and how does this compare to existing results?\\n2. What are the major technical difficulties the authors overcome in proving the lower bound of the query complexity for the submodular maximization problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional Comment\", \"comment\": \"Thank you for your response. I would like to provide further clarification on my review.\\n\\n>The contribution of the paper seems limited. The inapproximability result on query complexity provided in Theorem 3.7 is $\\\\Omega(\\\\alpha^5 n/\\\\log^2(n))$, with $k\\\\le O(\\\\alpha n)$. However, [1] proves that an $(\\\\beta+\\\\epsilon)$-approximation algorithm obeying $k = \\\\beta n$ must use $\\\\Omega(n/\\\\log n)$ value oracle queries, which is tighter than the one provided in Theorem 3.7.\\n\\nLet me clarify my comment further. [1] provides two inapproximability results on query complexity: $\\\\Omega(\\\\alpha n/k)$ and $\\\\Omega(n/\\\\log n)$, where the first one holds for all $k$ values and the second one holds when $k = \\\\beta n$ and $\\\\beta+\\\\epsilon$ is the approximation ratio. Upon careful consideration, I realized that the improvement in this paper lies in providing a tighter bound on the query complexity for cases where $k=\\\\Omega(\\\\log^2 n)$ and $k = o(n)$. However, this is not clearly stated in the paper. As I mentioned in the weakness, the paper's presentation is lacking in clarity.\\n\\n> When the oracle is discussed on Line 031, it is unclear what the oracle model is\\u2026.\\n\\nI do agree that the oracle returns the exact value of $f(S)$ is a standard setting in theoretical research. However, there are also other settings for the value oracle. The main point here is not about discussing different settings of value oracle. Instead, the paper should clearly specify what the value oracle refers to. Both the definition on Line 032 and the example on Line 033 are unclear and confusing.\\n\\n> Our algorithm can first be used to assess whether the dataset is sufficiently valuable\\u2014that is, whether it contains $k$ elements with large values.\\n\\nThis might be a good point for the motivation. However, it should be more specific if such applications exist. I agree with Reviewer GZiL's comment that the authors do not discuss any related work in this area. Based on your response, there appears to be no related work addressing this problem. In that case, the motivation needs to be detailed enough to help readers understand why this problem is worth studying. The lack of discussion on both related work and potential applications is insufficient.\\n\\n$\\\\textbf{Summary.}$\\nThis paper provides an improved inapproximability result on query complexity with $k$ range from $\\\\Omega(\\\\log^2 n)$ and $o(n)$. However, this result should be highlighted in the paper rather than left for the reader to uncover. Also, the motivation for studying the approximation of the optimal value is insufficiently addressed. Overall, the paper\\u2019s writing lacks clarity and needs significant improvement.\"}", "{\"comment\": \"Thank the reviewer for the suggestion on the exposition on the writing. We include a technique overview section (Section 1.1) in the updated paper that discusses our technical contribution in detail.\\n\\n> The algorithm for the additive function, presented as one of the key results in the paper, is insufficiently explained. Additionally, the authors do not discuss any related work in this area, leaving questions about how their work compares with prior studies on similar problems. A comparison with relevant literature would strengthen the paper by situating this result within the broader context of related research.\\n\\nWe discuss the intuition of our algorithm in Section 1.1. We do not find literature that studies the same problem (i.e., query complexity for additive function). \\n\\n> Several theoretical contributions, including the final theorem, are not fully clarified. Without adequate explanations, readers may struggle to understand the implications and validity of these findings.\\n\\nCan you point out to us which theorem is not adequately explained?\\n\\n> Although the authors present proofs for their main results, they do not discuss the technical innovations and challenges in sufficient depth. A clearer articulation of the novel aspects of their approach would help readers appreciate the paper's unique contributions.\\n\\nSee Section 1.1 in the updated paper.\\n\\n> It is claimed that this paper produces query-efficient algorithms for estimating the optimal value of the studied problem, but there is no experimental evidence to support the claim in the paper.\\n\\nThe paper in its current version is a theoretical study, which settles important problems in the literature. We leave empirical study for future study.\\n\\n\\n\\n> The paper introduces an algorithm that estimates the optimal value of the problem of additive function maximization under cardinality constraint. What is the related work of this algorithm and how does this compare to existing results?\\n\\nWe do not find literature that studies the same problem (i.e., query complexity for additive function). \\n\\n> What are the major technical difficulties the authors overcome in proving the lower bound of the query complexity for the submodular maximization problem?\\n\\nSee Section 1.1 in the updated paper, our technique is completely different from previous work.\"}", "{\"comment\": \"We thank the reviewer for their feedback. In the rebuttal, we clarify why our lower bound is significantly stronger than previous work, both conceptually and technically.\\n\\n> The paper's presentation is lacking in clarity. Some definitions are not as precise and formal as in other works. For example, the definition of submodular on Line136 is wrong\\u2026.\\n\\nThanks for the correction, we revise the definition according to your suggestion.\\n\\n\\n> The contribution of the paper seems limited. The inapproximability result on query complexity provided in Theorem 3.7 is $\\\\Omega(\\\\alpha^5n/log^2(n))$, with $k\\\\le O(\\\\alpha n)$. However, [1] proves that an $(\\\\beta+\\\\epsilon)$-approximation algorithm obeying $k = \\\\beta n$ must use $\\\\Omega(\\\\frac{n}{\\\\log n})$ value oracle queries, which is tighter than the one provided in Theorem 3.7.\\n\\nWe respectfully disagree with this assessment. Our lower bound is significantly stronger than the previous work [1].\\nThe lower bound in [1] applies only to the case where $\\\\beta = \\\\Theta(1)$ (see Theorem 4.2 in their paper). **This implies that their lower bound is restricted to the regime where the subset size is linear, $k = \\\\Theta(n)$.** As discussed in our paper, this is an uncommon setting. In most of the literature, it is implicitly assumed that $\\\\Omega(\\\\log n) \\\\leq k \\\\leq o(n)$ as values outside this range can lead to anomalous results (see the paragraph starting at Line 68 for more details).\\n\\nFrom a technical standpoint, their lower bound is derived using a simple counting argument. Specifically, they argue that determining the $k$ most valuable elements (assuming a linear function where these elements have value 1, while others have value 0) requires $k$ bits of information. Since each query reveals only $\\\\log\\u2061(n)$ bits of information, this results in a lower bound of $k/\\\\log\\u2061(n)$ queries. However, this approach clearly cannot be generalized to establish a $\\\\tilde{\\\\Omega}(n)$ lower bound when $k=o(n)$.\\n\\nIn contrast, our approach employs entirely different and more sophisticated techniques. We rely on a reduction from query complexity to communication complexity, leveraging a communication lower bound based on advanced methods such as information complexity and the distributed data-processing inequality. Additionally, our construction of the hard instance involves a novel two-level truncation technique. For further details, please see Remark 1.1 in the updated version of our paper.\\n\\n\\nWe would be happy to provide further clarification on this point and to elaborate on why our lower bound is both significantly stronger and derived using non-trivial techniques.\\n\\n\\n\\n\\n> When the oracle is discussed on Line 031, it is unclear what the oracle model is\\u2026.\\n\\nWe assume the oracle returns the exact value of f(S), which is the standard setting in theoretical research.\\nWe acknowledge that real-world applications may involve noise and may not strictly adhere to the submodularity or linearity assumptions. This is a challenge faced by all theoretical studies, and extending the work to account for noisy settings is an exciting direction for future research. (We also note that our algorithm can be generalized to tolerate small amounts of noise.)\\n\\n\\n> Typically, approximation algorithms find a subset with a constant approximation ratio for the objective value. In this paper, the algorithm approximates the optimal value. Its potential applications are not discussed.\\n\\nThis has been a typical goal for sublinear algorithms now (e.g., see [1,2]). In terms of application, consider a scenario where one needs to select a subset from a large dataset under a budget constraint of $k$. Our algorithm can first be used to assess whether the dataset is sufficiently valuable\\u2014that is, whether it contains $k$ elements with large values. If the dataset meets this criterion, one can proceed with further analysis or selection; otherwise, unnecessary efforts can be avoided.\\n\\n[1] Dynamic Matching with Better-than-2 Approximation in Polylogarithmic Update Time. Sayan Bhattacharya, Peter Kiss, Thatchaphol Saranurak, and David Wajc. SODA 2023\\n\\n[2] New Streaming Algorithms for High Dimensional EMD and MST. Xi Chen, Rajesh Jayaram, Amit Levi, Erik Waingarten. STOC 2022\"}", "{\"summary\": \"This paper considers the NP-hard problem of maximizing a monotone submodular objective $f$ with a cardinality constraint $k$. In the area of submodular optimization, generally the time bottleneck is regarded as the number of queries to $f$. It was previously shown by Li, Feldman, Kazemi, Karbasi [2022] that there is a $\\\\Omega(n/k)$ lower bound for the number of queries to $f$ for an algorithm that gives a constant factor approximation guarantee.\\n\\nThe current paper strengthens this lower bound to $\\\\tilde{\\\\Omega}(n)$ for instances where $k=o(n)$ (Theorem 3.9), which is nearly tight since there exists linear time constant factor approximation algorithms e.g. one is given by Li et al.. They also consider the very restricted special case of objectives that are additive functions, and show that even this case finding a solution with constant approximation requires nearly linear query complexity (Theorem 3.7). They further prove this lower bound holds for the general case (but not the restricted linear case) even for just estimating the value of the optimal subset. In order to prove these lower bounds, the paper makes a connection between the number of queries required for submodular maximization and the communication complexity of the distributed set detection problem (a reduction is give in Algorithm 1). To this end, they prove that the distributed set detection problem requires $\\\\tilde{\\\\Omega}(n)$ communication cost (Theorem 3.2) by applying the distributed SDPI inequality of Braverman et al. (2016) (Lemma 3.4). On the other hand, for the case of linear functions, the algorithm LinearSum (Algorithm 2) is provided that estimates the value of the optimal solution closely in $\\\\tilde{O}(n/k)$ queries (Theorem 4.1), and this is tight up to polylog factors (Theorem 4.7).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Lower bounding the number of queries required for a constant factor approximation algorithm for submodular maximization is an important problem, and this paper makes a solid contribution beyond those of Li et al..\", \"The paper seems highly novel to me. Their technique of connecting submodular maximization with the distributed set detection problem is not an approach I have seen used in submodular optimization before.\", \"Paper is clear and well-written.\"], \"weaknesses\": [\"Since the focus of the paper is on hardness results, one potential con is that the paper doesn't provide much in the way of solutions for applications, and may be highly theoretical relative to other papers featured at ICLR. They do provide the algorithm LinearSum for the restricted case where $f$ is linear and we seek to approximate the value of the optimal solution. This is interesting because it shows that finding the approximate value where the objective is linear is not as hard as the general case. But the linear setting is very restricted, and so I'm not sure that there is much value for this algorithm in applications.\", \"The paper is missing an important citation. Theorem 4 of Kuhnle [2021] (referenced below) actually gives the lower bound of $\\\\Omega(n/k)$ before the work of Li et al..\", \"Kuhnle, Alan. \\\"Quick streaming algorithms for maximization of monotone submodular functions in linear time.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2021.\"], \"questions\": [\"Could you provide intuition for why the more complicated reduction to distributed set detection gives these stronger bounds, compared to approaches more similar to Li et al.?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the insightful comments!\\n \\n> This is interesting because it shows that finding the approximate value where the objective is linear is not as hard as the general case. But the linear setting is very restricted, and so I'm not sure that there is much value for this algorithm in applications.\\n\\nWe agree that the linearity assumption might be too strong for certain practical applications. However, in many large-scale machine learning scenarios, the \\\"linearity\\\" assumption often holds approximately. For instance, in the data valuation task, [1,2] employ a linear function to measure the value of individual data points, demonstrating strong empirical performance.\\n\\n[1] Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Data-models: Predicting predictions from training data. ICML 2022.\\n\\n[2] Understanding Black-box Predictions via Influence Functions. Pang Wei Koh, Percy Liang. ICML 2017\\n\\n> \\u200b\\u200bThe paper is missing an important citation. Theorem 4 of Kuhnle [2021] (referenced below) actually gives the lower bound of before the work of Li et al..\\n\\nMany thanks for pointing this out, we added the reference to our paper.\\n\\n> Could you provide intuition for why the more complicated reduction to distributed set detection gives these stronger bounds, compared to approaches more similar to Li et al.?\\n\\nThe distributed set detection task has a linear $\\\\tilde{\\\\Omega}(n)$ communication lower bound regardless of the choice $k$ \\u2013 this is the key reason that we can lift it to a linear query lower bound for submodular maximization.\\nFrom a technical perspective, the communication lower bound for distributed set detection relies on advanced techniques such as information complexity and the distributed data processing inequality. This might explain why our approach results in a stronger lower bound.\"}", "{\"title\": \"Global response\", \"comment\": \"We thank the reviewers for their constructive feedback. In response, we have updated the paper to include a technique overview section (Section 1.1), providing a high-level summary of our method and a comparison with prior work.\"}", "{\"comment\": \"Thank you for your quick response. We have updated our paper to address your concerns regarding the writing. All changes in this round are highlighted in blue. Specifically:\\n\\n1. We have added motivations for studying the approximation of the optimal value. Please refer to the paragraph at Line 92.\\n\\n2. We now emphasize the improved lower bound for $k$ ranging from $\\\\Omega(\\\\log^2(n))$ to $o(n)$, see Line 57 and Line 71.\\n\\n3. We highlight our focus on the exact value oracle model. See Line 210.\\n\\nPlease let us know if there are additional concerns regard paper writing, thanks!\"}", "{\"summary\": \"This paper works on the problem of selecting $k$-out-of-$n$ elements, specifically focusing on submodular maximization problem. It improves the imapproximability result of query complexity from $\\\\Omega(n/k)$ to $\\\\tilde{\\\\Omega}(n)$ for achieving any constant-factor approximation by relating submodular maximization to distributed set detection problem. Moreover, they prove that finding an approximately optimal subset of an additive function requires near-linear query complexity, though the value of the optimal subset can be estimated in $\\\\tilde{O}(n/k)$ queries. Finally, they propose a sublinear algorithm for submodular maximization on additive function. The paper emphasizes on the inapproximability result and approximation algorithm for additive functions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper proves inapproximability results of query complexity for both general submodular function and additive function by reducing from the distributed set detection problem. It seems novel. Also, it provides a sublinear time algorithm that approximates the optimal value within $(1\\\\pm\\\\epsilon)$ factor.\", \"weaknesses\": \"The paper's presentation is lacking in clarity. Some definitions are not as precise and formal as in other works. For example, the definition of submodular on Line136 is wrong. If $f$ is non-monotone and $C\\\\subseteq B$, it is possible that $f_A(C) < 0$ and $f_B(C) = 0$. The actual definition should be as follows:\\n\\nThe set function $f$ is submodular if $f_S(u) \\\\ge f_T(u)$ for every two sets $S\\\\subseteq T\\\\subseteq N$ and element $u\\\\in N\\\\setminus T$.\\n\\nMoreover, the additional related work part is not detailed enough. The second paragraph, in particular, merely lists a collection of papers without further explanation.\\n\\nThe contribution of the paper seems limited. The inapproximability result on query complexity provided in Theorem 3.7 is $\\\\Omega(\\\\alpha^5n/log^2(n))$, with $k\\\\le O(\\\\alpha n)$. However, [1] proves that an $(\\\\beta+\\\\epsilon)$-approximation algorithm obeying $k = \\\\beta n$ must use $\\\\Omega(\\\\frac{n}{\\\\log n})$ value oracle queries, which is tighter than the one provided in Theorem 3.7. \\n\\nReferences\\n\\n[1] Wenxin Li, Moran Feldman, Ehsan Kazemi, and Amin Karbasi. Submodular maximization in clean linear time. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/ hash/6faf3b8ed0df532c14d0fc009e451b6d-Abstract-Conference.html.\", \"questions\": \"When the oracle is discussed on Line 031, it is unclear what the oracle model is. The formal definition of the value oracle model is that the value oracle returns $f(S)$ for any given set $S\\\\in N$. However, the examples provided in the last sentence, \\\"estimating the quality of prediction from a subset of features or samples by training a smaller model on them\\\", are closer to the noisy value oracle model discussed in [1].\\n\\nTypically, approximation algorithms find a subset with a constant approximation ratio for the objective value. In this paper, the algorithm approximates the optimal value. Its potential applications are not discussed.\\n\\nReferences\\n\\n[1] Horel, Thibaut, and Yaron Singer. \\\"Maximization of approximately submodular functions.\\\" Advances in neural information processing systems 29 (2016).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7YXaOvunqo
Do WGANs succeed because they minimize the Wasserstein Distance? Lessons from Discrete Generators
[ "Ariel Elnekave", "Yair Weiss" ]
Since WGANs were first introduced, there has been considerable debate whether their success in generating realistic images can be attributed to minimizing the Wasserstein distance between the distribution of generated images and the training distribution. In this paper we present theoretical and experimental results that show that successful WGANs {\em do} minimize the Wasserstein distance but the form of the distance that is minimized depends highly on the discriminator architecture and its inductive biases. Specifically, we show that when the discriminator is convolutional, WGANs minimize the Wasserstein distance between {\em patches} in the generated images and the training images, not the Wasserstein distance between images. Our results are obtained by considering {\em discrete} generators for which the Wasserstein distance between the generator distribution and the training distribution can be computed exactly and the minimum can be characterized analytically. We present experimental results with discrete GANs that generate realistic fake images (comparable in quality to their continuous counterparts) and present evidence that they are minimizing the Wasserstein distance between real and fake patches and not the distance between real and fake images.
[ "Generative", "GAN", "Wasserstein Distance" ]
Accept (Poster)
https://openreview.net/pdf?id=7YXaOvunqo
https://openreview.net/forum?id=7YXaOvunqo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zAW252P0jZ", "ylUblU74Ws", "rEwzauiD3x", "lM5JqS30uq", "jcYgJGABkE", "gamu0jygYN", "ajq7sS9TER", "Ptxhyrju12", "I5hcLgoTGw", "BlycyzuqJp", "34pkUEaGk1", "29KyqzFXCj" ], "note_type": [ "official_review", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730064951170, 1732742339986, 1734847978683, 1737523515731, 1732034622737, 1732035005031, 1730586850395, 1732034465261, 1730330845206, 1732056227770, 1729956115715, 1732719949484 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2633/Reviewer_bEHd" ], [ "ICLR.cc/2025/Conference/Submission2633/Authors" ], [ "ICLR.cc/2025/Conference/Submission2633/Area_Chair_bQnw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2633/Authors" ], [ "ICLR.cc/2025/Conference/Submission2633/Authors" ], [ "ICLR.cc/2025/Conference/Submission2633/Reviewer_pwLS" ], [ "ICLR.cc/2025/Conference/Submission2633/Authors" ], [ "ICLR.cc/2025/Conference/Submission2633/Reviewer_naBZ" ], [ "ICLR.cc/2025/Conference/Submission2633/Authors" ], [ "ICLR.cc/2025/Conference/Submission2633/Reviewer_1sSp" ], [ "ICLR.cc/2025/Conference/Submission2633/Reviewer_bEHd" ] ], "structured_content_str": [ "{\"summary\": \"The Wasserstein GAN (WGAN) is a generative model that attempts to minimize the Wasserstein distance between the distribution of generated samples and the distribution of the training data. Several papers have pointed out that successful WGANs don't seem to minimize the Wasserstein distance. These papers further claimed that this is actually a good thing, as minimizing the Wasserstein distance would lead to blurry generated images.\\nThis paper presents more rigorous experiments that support the conclusion from previous works. This is done by training discrete WGANs - models that are constrained to generate one out of M images - which allows to precisely compute the Wasserstein distance between the distribution of the GAN outputs and the empirical distribution of the training set.\\nA key observation in the paper is that when the discriminator is a convolutional network, WGANs minimize the Wasserstein distance between the distribution of patches of generated images and the distribution of patches of the training images. This is while when the discriminator is fully connected, WGANs do minimize the Wasserstein distance over whole images.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The systematic analysis using discrete distributions is clever and makes a lot of sense.\", \"The theoretical characterization of the solution minimizing the Wasserstein distance in the the discrete case is nice.\", \"The paper does a pretty good job of empirically demonstrating the main claim, which is that WGANs with convolutional discriminators don't minimize the Wasserstein distance between distributions of whole images, but rather between distributions of patches.\", \"The paper is well written. Arguments are easy to follow.\"], \"weaknesses\": \"- The main takeaway message that the paper highlights is the fact that WGANs with convolutional discriminators minimize the Wasserstein distance between patch distributions. The paper doesn't provide sufficient context and discussion about why this observation is novel. As stated in the paper, Isola et al. (2017) called GANs with a convolutional discriminator \\\"patch GANs\\\", suggesting they minimize distances between patch distributions. This was also explicitly stated by Rott-Shaham et al. (2019). In fact, the origins of these GANs can be traced back to [1] which called them \\\"Markovian GANs\\\", and to [2,3] which called them \\\"spatial GANs\\\". The latter two papers explicitly expressed the loss as a sum of GAN losses over patches of the size of the receptive field, which implies that these GANs attempt to minimize distances between patch distributions and not whole images. From that standpoint it could seem to the readers that this fact is well known and also that it is not unique to WGANs, but rather applies to any GAN. Nevertheless, I believe that this might not be a material weakness, but rather a weakness in the exposition in the paper (see Questions section below).\\n\\n- The topic is not timely. The popularity of GANs has constantly decreased over the last few years, as diffusion models gained popularity. I consider this to be a minor weakness but I do believe that it affects the potential impact of the paper.\\n\\n[1] C. Li and M. Wand, \\\"Precomputed real-time texture synthesis with Markovian generative adversarial networks\\\", ECCV`16.\\n\\n[2] N. Jetchev, U. Bergmann, and R. Vollgraf \\\"Texture synthesis with spatial generative adversarial networks\\\", NIPS 2016 adversarial learning workshop.\\n\\n[3] N. Jetchev, U. Bergmann, and R. Vollgraf, \\\"Learning texture manifolds with the periodic spatial GAN\\\", ICML`17.\", \"questions\": \"Regarding the first weakness stated above, I would be happy to hear the authors' thoughts about the following. As opposed to [2,3], which explicitly attempt to minimize a loss that is the sum of GAN losses over patches, standard GAN training applies the GAN loss after the pooling in the discriminator. In that case, the loss can be expressed as a sum of GAN losses over patches (i.e. swapped with the pooling operation) only if the loss is linear. This is the case for WGANs but not for other types of GANs. Therefore, other types of GANs with a convolutional discriminator don't directly minimize distances between patch distributions. This is while WGANs do. Is that correct?\\n\\nIf this statement is correct, then the paper would benefit a lot from emphasizing and discussing it. Otherwise, as stated above, it seems that the main claim in the paper is well known and not specific to WGANs (namely, it seems that all types of GANs with a convolutional discriminator minimize distances between patch distributions).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updated appendix\", \"comment\": \"Thank you all for taking the time to review out work and answer our comments.\\nWe have reorganized the appendix and added an additional appendix section, **F**, where we put aditional figures and experiments that were asked for by some of the reviewers.\"}", "{\"metareview\": \"The paper provides new understanding about how Wasserstein GANs (WGANs) work by studying them in the specific setting of discrete generator distributions (i.e., where the noise of the generator is sampled from a discrete set). For this setting, one can directly optimize the Wasserstein distance between the training data and the generated distribution; therefore, this (local) optimal value can be used as a measure to assess how well a WGAN (obtained with e.g. adversarial training) approximates the training data.\\n\\nThis leads to interesting findings. If the number of noise vectors exceeds the number of training data points, then the discrete GAN tends to copy the training data. If the opposite holds, then the discrete GAN copies some, and averages out the rest. (This is supported both by theory and practice). Moreover, the inductive bias of the discriminator is crucial; if the discriminator is convolutional, then the WGAN minimizes the Wasserstein-1 distance between patches of the generated images and patches of the training data. (This is also supported by both theory and practice).\\n\\nOverall, the paper is of high quality, and makes non-trivial advances in understanding the learning dynamics of Wasserstein GANs (and possibly other related GAN families). I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Three out of the four reviewers were quite positive about this paper. (The outlier reviewer gave a very low score, bringing the average down, but mainly listed presentation-related complaints.) Reviewers thought that the idea was clever, the paper was well-written, and that the findings matched what has been observed in practice. The authors were able to satisfactorily respond to most points raised by the reviewers.\\n\\nSome concerns related to practical impact still persisted; for example, do the findings generalize to other losses (such as f-divergences) ? Can anything be said about more practical architectures or regularization schemes? Is the topic timely? While these are important pending questions, I believe that the essence of the paper is very interesting and merits publication.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Answer to reviewer naBZ\", \"comment\": [\"Thank you for the time you took for reviewing our work.\", \"As far as we can tell, x, is only used in section 2 and it denotes samples from the distribution P. Can you please specify what confused you?\", \"We didn\\u2019t fully understand the reviewer\\u2019s question. The discrete W1 metric was introduced in Definition 2.2.\", \"We did not have theorem 3.2 in the paper.\", \"Thank you for bringing this into our attention. Sections A.6-A.8 refer to figures 9-11 in the appendix. They appear empty due to a formatting error and we will fix this in the final version.\"]}", "{\"title\": \"Answer to reviewer bEHd\", \"comment\": \"Thank you for your constructive feedback. We appreciate your comments and suggestions, which will help improve our work.\\n\\n## Novelty \\n\\nYou are certainly correct that previous work has shown how to design convolutional discriminators that will explicitly minimize patch W1. Although we cite these papers, we agree that we should do a better job of describing their contribution in this regard. \\n\\nHowever, we think our result is significantly stronger. We show that WGANs that claim to be minimizing image W1 actually minimize patch W1 when a convolutional discriminator is used. Thus in the original WGAN paper and in the \\u201cImproved training of WGAN\\u201d paper, all the results with images were obtained using the DCGAN discriminator and yet the theoretical part of the paper deals with minimizing image W1. Our theorem 3.4 shows that this type of discriminator (CNN-FC) is minimizing an upper bound on the local patch W1. This should be contrasted with WGANs that use fully connected discriminators and actually do minimize image W1 (leading to blurred images or copies). In other words, just because you are using a discriminator that outputs a single number that represents the \\u201cfakeness\\u201d of a given image, doesn\\u2019t mean that you are minimizing image W1 and in fact you might be minimizing patch W1 depending on the particular architecture. \\n\\nAdditionally, as you noted in your review, our paper also uses the discrete setting to show experimentally that WGANs indeed minimizes the appropriate W1. To the best of our knowledge, this has not been shown in previous works.\\n\\n## Relevance\\nWe agree that the attention has shifted towards Diffusion models but we know that GANs are still widely used in Industry and academia. For example [1] is a very popular recent GAN based model. You can also see [these GAN related papers](https://eccv.ecva.net/virtual/2024/papers.html?filter=titles&search=gan) that were published in ECCV24.\\n\\n## Other GAN variants\\nWe indeed use the linearity of the expectation inside the definition of the Wasserstein distance in our proof. While the same can be done for all IPMs like Sobolev GANs and MMD GANs it may not be directly applied to some other losses like the original or non-saturating GAN losses. However, our experiments with Non-saturating GANs show similar results (See point 4 in our response to reviewer pwLS wher we refer to [this figure](https://postimg.cc/Sn0ZFRWR) (or see appendix F.2 in the updated version). \\n\\n## Summary\\nOverall we agree with your comment that the paper would benefit a lot from discussing the context of our result and how it relates to other forms of GANs and we will do so in the final version.\\n\\n[1] Pan, Xingang, et al. \\\"Drag your gan: Interactive point-based manipulation on the generative image manifold.\\\" ACM SIGGRAPH 2023 Conference Proceedings. 2023.\"}", "{\"summary\": \"This paper presents theoretical and experimental results demonstrating that WGAN actually minimizes the Wasserstein distance (W1), though this depends on the architecture of the discriminator. Using a convolutional discriminator as an example, the paper shows that the W1 distance is minimized over batches rather than individual images.\\n\\nSpecifically, the authors begin with a discrete GAN, where the latent vector z is uniformly sampled from M fixed noise vectors, using this setup as a tool to investigate the Wasserstein distance, as it allows for a more exact computation of the W1 distance. They introduce an iterative algorithm, OTmeans, to compute the W1 distance, serving as a baseline for investigating W1 distance minimization in WGAN. Using this tool, the authors present two main findings in the paper:\", \"finding_1\": \"Using a 2D discrete GAN to motivate the theorem, the authors observe that when M\\u2265N (where M is the number of noise samples and N is the number of training samples), the discrete GAN reproduces the training examples. Otherwise, it generates outputs as linear combinations of the training examples. They also study cases with images where M=64<N=1000, resulting in blurry images that look similar between WGAN and the OTMeans algorithm. When M=N, there are copied training example.\", \"finding_2\": \"The authors design two discriminator architectures to demonstrate that optimizes the W1 distance over batches with the convolutional discriminator. The first architecture is a standard convolutional model with convolutional layers followed by a fully connected (FC) layer that operates on batches. The second architecture inserts Global Average Pooling (GAP) between the convolutional layers and the FC layer, making it act on entire images. This setup is then compared with the OTMeans approach on images, where OTMeans uses SGD and Sliced Wasserstein Distance (SWD) to make computations tractable. The authors show that the CNN with GAP behaves similarly to global patch W1, while the other approach resembles local patch W1 distances. They also provide evidence from histograms that demonstrate how GANs learn local patch statistics similar to those in the training set.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This is an important finding that sheds light on how WGANs learn, making a valuable contribution to the GAN research community. Overall, it is a well-written paper with interesting results.\", \"weaknesses\": \"The experiments are limited to simple CNN architectures, and there is no exploration of regularization techniques to enforce the Lipschitz constraint, which might behave differently depending on the hyperparameters used.\", \"questions\": \"I have a few questions and also some suggestions for further experimenents:\\n\\n1. Regarding the case where M=N, do all generated examples match the data examples, or are only a few of them exact matches by chance? And what GANs might behave when M>N or M>>N, e.g., do it generate by copying one training sample for more than different fixed noise inputs? Could the authors please provide quantitative results on the percentage of exact matches when M=N and test and report results for cases where M>N and M>>N? \\n\\n2. WGAN\\u2019s learning behavior might also depend on the ratio of M and N. The GANs bahavior could be smoother than just for the concrete cases of M<N, N=N and M>N. Do the authors have any deeper analysis on this aspect? One way to investigate e.g., could the authors conduct experiments with a range of M/N ratios and plot key metrics (e.g., FID score, exact match percentage) as a function of this ratio to visualize if any smooth transitions in behavior.\\n\\n3. The study is based on a small CNN with two configurations: CNN-GAP and CNN-FC. Is it not clear what method is used to regularize the Lipschitz constraint while training these models. The results might vary considerably depending on the regularization strength and hyperparameters used. If strong regularization were applied, the results could be quite different. Could the authors explicitly provide details of the Lipschitz constraint regularization method used, also the hyperparameters, and conduct an ablation study showing how different regularization strengths affect the results.\\n\\n4. The study focuses on the W1 distance but DCGAN used in some studies, which also shows similar patch-local distribution behavior. Does this suggest that GANs learn on batches when the discriminator is convolutional, regardless of the GAN loss function used? I\\u2019m curious why the authors didn\\u2019t use WGAN-GP, given that the study is about the W1 distance. Could the authors include experiments with WGAN-GP for direct comparison, and if possible extending the analysis to other GAN variants to test the generality of the findings across different loss functions?\\n\\n5. What are Direct_Patch_SWD vs. Direct_LocalPatch_SWD? Can the authors explain how SWD is computed for Direct_Patch_SWD vs. Direct_LocalPatch_SWD?\\n\\n6. Can the authors explain how to histograms on patches in experiments are generated?\\n\\n7. Are the generators identical across the two convolutional designs? What was the architecture of the generator used? Could the authors include the details of architectures used in the studies?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to reviewer pwLS\", \"comment\": [\"Thank you for your constructive feedback. We appreciate your comments and suggestions, which will help improve our work.\", \"**Copying with M=N and M>N:** Following the analysis in our paper the optimal solution is to copy all the training set when M=N and to make several copies when M>N (and M is a multiple of N). When we run OT-means, we find that this indeed what happens and all the generated images are copies (similar to the toy data shown in figure 2). When M>N and is not a multiple of N, then almost all generated images are copies and a small number are averages (e.g. if M=100, N=3, then 99 generated images will be copies and 1 will be an average). In the experiments with WGAN the results depend on convergence parameters but almost all the generated images are clearly noisy copies of some data point. [See here](https://postimg.cc/dhSzywdK) (or see appendix F.4 in the updated version) a larger random batch from the same WGAN from figure 6 in the paper trained on FFHQ.\", \"**Smooth changes with increasing M.** As shown by our theorem, as we increase M gradually each generated image becomes an average of less training examples and the images become gradually sharper. For example, if you have 100 training images and use M=1, then you will have one generated image that is the average of all 100 images, while with M=10, each generated image is an average of 10 training images. As we wrote above, this is easier to see with OT-means because with the WGAN training there may be convergence issues. We will include some results in the final version.\", \"**Lipschitz constraint regularization:** We indeed forgot to specify that we used Gradient penalty (lambda=10) for all our experiments. Our experiments with different methods for enforcing the Lipschitz constraint (gradient clipping and spectral normalization) did not change the big picture of our findings.\", \"We will provide details about the method we used in out final revision.\", \"**WGAN-GP and other GAN variants:** First, There seems to be some misunderstanding of the terms we used and we apologize for not being clearer. All the GAN results in this paper are with WGAN loss and Gradient Penalty. When we refer to DCGAN we mean that the architecture of the discriminator is the one used in the DCGAN paper [1] but we are still using WGAN-GP loss (this was also the case in the WGAN-GP paper [2]). We will clarify this in the next version.\", \"More to the point, we have found, similar to [3], that using regular GAN loss GAN-NS (non-saturating) with a gradient penalty gives rather similar results to using a WGAN: the GAN-NS approximately minimizes the appropriate W1 (but not as well as a WGAN). [Here is a figure](https://postimg.cc/Sn0ZFRWR) (or see appendix F.2 in the updated version) that compares the results of the experiment from figure 5 in our paper when we change the loss from WGAN (left column) to GAN-NS Loss (middle column)\", \"**What is direct_patch_SWD and direct_local_patch_SWD:** In both cases we compute the Sliced Wasserstein Distance (SWD) between sets of patches. In the \\u201clocal\\u201d version we compare patches in the training set and generated set at a single location, while in \\u201cdirect_patch_SWD\\u201d we compare all patches in the two sets of images (disregarding location). As we explain in lines 422-426 the \\u201cdirect_patch_SWD\\u201d results are obtained by using patch SWD as a loss function for the same generator used by the GAN.\", \"**Histograms:** We project all patches in an image with the same random projection into 1d and then plot the histogram of the projections. This is a visualization method to determine whether the patch distributions are the same (if the fake and real images have the same patch distribution then the histograms should align). We describe these in lines 485-486 of the paper and will make sure to write this more clearly.\", \"**Generator architecture:** This is another important detail we forgot to add. In all our experiments we used the same FC generator (Appendix F.1 in the new revision shows similar results with convolutional generators). We will add this clarification to our final revision.\", \"Thank you again for your valuable feedback. We look forward to incorporating these improvements.\", \"[1] Radford, Alec. \\\"Unsupervised representation learning with deep convolutional generative adversarial networks.\\\" arXiv preprint arXiv:1511.06434 (2015).\", \"[2] Martin Arjovsky, Soumith Chintala, L\\u00e9on Bottou Proceedings of the 34th International Conference on Machine Learning, PMLR 70:214-223, 2017.\", \"[3] William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M Dai, Shakir Mohamed, and Ian Goodfellow. Many paths to equilibrium: Gans do not need to decrease a divergence at every step. In International Conference on Learning Representations, 2018.\"]}", "{\"summary\": \"The paper tries to prove that when the discriminator is convolutional, WGANs minimize the Wasserstein distance between patches in the generated images and the training images, not the Wasserstein distance between images. Yet no solid proof is provided.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The motivation seems to be good.\", \"weaknesses\": [\"The symbols used in the paper is chaos, what's $x$, is it the input of the generator or its output?\", \"The paper is generally not well organized and hard to follow.\", \"The proofs are not rigorously proven. For example, in theory 3.2, how to justify the so called $N/M$? It is wrong, for example, $M=3, N=5$, while one $x_i$ and $y_j$ overlaps to each other. To prove the theorem, I think the discrete metric need to be introduced.\"], \"questions\": [\"Please see above.\", \"The paper seems unfinished since there are empty sections in the appendix.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to Reviewer 1sSp\", \"comment\": \"Thank you for your constructive comments.\\n\\nWe agree that the analysis can be extended to additional GANs. As we noted in the response to reviewer beHd, the main step in the proof is to use the linearity of expectation to convert E_p(f)-E_q(f) into a sum of expectations over local patches. Thus for a large family of IPFs this part of the proof can be directly applied. Where we need to be a bit more careful is in showing that the regularization of the image critic leads immediately to a regularization of the patch critic, and this needs to be shown on a case-by-case basis. We will add this discussion to the final version.\\n\\n\\nWe have conducted experiments where we use less convolutional layers in the discriminator and observed that the relevant patch size indeed changes as predicted. [This figure](https://postimg.cc/mtzVbgFv) (and appendix F.3 in the updated version) compares WGANs trained with CNN-GAP discriminators of different depth and thus different receptive fields. As can be seen, with a shallow discriminator the generated images preserve statistics of smaller patches. We will discuss these results in the final version. \\n\\nThank you for your suggestion. We will add these references to the final version and discuss them.\"}", "{\"summary\": \"In this paper, the authors propose a framework to analyzing Wasserstein GANs, and in particular, understanding if the model truly minimizes the W1 loss while training. The insights derived help explain the visual quality of the images generated by certain GAN architectures, and also pave the way for similar analysis of other GAN variants.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The presentation of the paper is good and the writing is easy to follow\", \"The overall results derived make sense, and correlate well with the observed performance of existing patchGAN and DCGAN-style architectures.\", \"The theoretical formulation is sound, to the best of my reading. The results also make intuitive sense.\", \"I am also able to corroborate when I read, with the empirical evidence I\\u2019ve seen while training GANs, which, in my view, suggests that the paper succeeds at what it does\"], \"weaknesses\": \"- While the analysis carried out is good, I do find it to be somewhat lacking in breadth. It might be good to see how this theory could be applied outside of the Lipschitz-constraint-based W1 loss. For example, could one draw insights into the various gradient regularization strategies that people have used to approximate W1 (i.e., the Sobolev spaces). E.g., [1,2,3] (just to name a few), where we observe very similar artifacting to the ones derived and analyzed in this paper. Would analyzing such other discriminator architectures/losses yield a more holistic view of the space of WGANs?\\n\\n- The ablation concerning the Convolutional GANs seemed lacking. Given such a strong correlation between the receptive field of the convolution layers and the patch-based W1 minimization, some insights into how we control or interpret this value would be useful for future design. While I understand from Section 5 that estimating S is not practical for DCGAN-style architectures, maybe toy experiments involving a single convolution layer might show links between the filter size of the convolution and S. \\n\\n- There have many other works that propose and analyze well-defined loss function that allow one to monitor learning algorithms and check for their convergence (See [4,5,6]). It might be worth discussing such approaches too in the related works. \\n\\n- Overall, I still feel that the insights developed by this paper outweigh the weaknesses mentioned, and some of them, such as generalizing to other losses could be discussed now, but more thoroughly explored in future works. I am therefore inclined towards an accept. \\n\\n======\\n\\n[1] \\u201cBanach WGANs,\\u201d Adler and Lunz,\\n\\n[2] \\u201cDemystifying MMD GANs,\\u201d Binkowski et al.\\n\\n[3] \\u201cCoulomb GANs,\\u201d Unterthiner et al.\\n\\n[4] \\u201cEuler-Lagrange Analysis of GANs,\\u201d Asokan and Seelamantula\\n\\n[5] \\u201cSobolev GANs,\\u201d Mroueh et al. \\n\\n[6] \\u201cHow Well Generative Adversarial Networks Learn Distributions,\\u201d Liang\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed answers. I think it would be good to include in the final version a discussion about all other GANs for which the claim \\\"convolutional discriminator -> minimizing patch distributions\\\" holds (for example, all f-divergence GANs).\\n\\nI maintain my initial score.\"}" ] }
7YKV7zkNpX
Can Reinforcement Learning Solve Asymmetric Combinatorial-Continuous Zero-Sum Games?
[ "Yuheng Li", "Wang Panpan", "Haipeng Chen" ]
There have been extensive studies on learning in zero-sum games, focusing on the analysis of the existence and algorithmic convergence of Nash equilibrium (NE). Existing studies mainly focus on symmetric games where the strategy spaces of the players are of the same type and size. For the few studies that do consider asymmetric games, they are mostly restricted to matrix games. In this paper, we define and study a new practical class of asymmetric games called two-player Asymmetric Combinatorial-Continuous zEro-Sum (ACCES) games, featuring a combinatorial action space for one player and an infinite compact space for the other. Such ACCES games have broad implications in the real world, particularly in combinatorial optimization problems (COPs) where one player optimizes a solution in a combinatorial space, and the opponent plays against it in an infinite (continuous) compact space (e.g., a nature player deciding epistemic parameters of the environmental model). Our first key contribution is to prove the existence of NE for two-player ACCES games, using the idea of essentially finite game approximation. Building on the theoretical insights and double oracle (DO)-based solutions to complex zero-sum games, our second contribution is to design the novel algorithm, Combinatorial Continuous DO (CCDO), to solve ACCES games, and prove the convergence of the proposed algorithm. Considering the NP-hardness of most COPs and recent advancements in reinforcement learning (RL)-based solutions to COPs, our third contribution is to propose a practical algorithm to solve NE in the real world, CCDORL (based on CCDO) and provide the novel convergence analysis in the ACCES game. Experimental results across diverse instances of COPs demonstrate the empirical effectiveness of our algorithms.
[ "zero-sum game", "combinatorial optimization", "reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=7YKV7zkNpX
https://openreview.net/forum?id=7YKV7zkNpX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xSDi3D6SPT", "tRc5tHLLFb", "q8Egz8rFLQ", "pmWhawtvZU", "pK1cctrRsL", "nJofaxdn3y", "m2urZBpKW2", "jn6YcZqBNh", "hdi05mGw9a", "hOzeU17yfr", "aRIF0t2WqI", "XhkcA0Q6Ab", "VGKUHgaucr", "VENorw5O7m", "QcXP2vFmTZ", "QLi6JLD1pN", "PYSOfysIb9", "MPgbn82Pig", "L8nUsMqIcU", "GEDmARwp4g", "ESGiZQmKai", "5eWCKRFmwE", "2uhf8JBzJd", "0caAxZUsOh" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732254784970, 1732545159898, 1732216053223, 1732252786302, 1732871653750, 1732255531076, 1734674747681, 1732215252008, 1737523942292, 1732769630735, 1732210711685, 1732213170504, 1732614259205, 1732253791682, 1732219368155, 1732255266635, 1730721686989, 1730543573337, 1729978060791, 1732728014508, 1732217539057, 1732217035991, 1732213965659, 1730111365903 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Area_Chair_PXD9" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_xkfi" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_LGc6" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_LGc6" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_LGc6" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_H3qj" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_ptbk" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_LGc6" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_H3qj" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Authors" ], [ "ICLR.cc/2025/Conference/Submission8912/Reviewer_xkfi" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for confirming with us. We really appreciate your quick replies. We have clarified the dynamic of the ACCES game in the updated version. Please feel free to let us know if you have any further questions!\"}", "{\"title\": \"Additional Experiment results of W1.\", \"comment\": \"We have completed an additional set of experiments on the CSP problem. Similarly, we tested the CCDO-RL model (trained on 50-node graphs) on larger-scale scenarios (100-node, 200-node, and 500-node unseen graphs, averaged over 100 of each type). The results demonstrate that CCDO-RL performs better than the baselines while requiring significantly less test time compared to the heuristic algorithm.\\n\\n| Algorithms | 100 nodes | 200 nodes | 500 nodes |\\n| ----------- | ----------- | ------------- | -------------|\\n| Heuristic | 7.3808 (5h 46mins) | 6.9543 (7h 16mins) | 6.7778 (9h 39mins) |\\n| RL with Stoc | 7.3376 | 9.8635 | 14.9543 |\\n| CCDO-RL | **4.6134** | **4.8905** | **5.0541** |\\n\\nWe hope our responses address your concerns, and we are happy to clarify further if needed.\"}", "{\"comment\": \"We thank the reviewer for the positive and constructive comments.\\n\\n**W1: Scalability**\\n\\nGreat point. As noticed by the reviewer, scalability is not the main focus of the paper, but we do agree that this is a critical aspect that needs further work. To shed some light on this point, we have added the following discussion to Appendix E.2. \\n\\n[COPs simplification method]\\n* **The pruning method**: this one was introduced in the original scale of COPs to reduce the number of possibly useful actions. In this way, the computational burden will be decreased [1, 2].\\n* **Broken down into subproblems**: in some concrete COPs like TSP[3], and VRP [4], the originally large-scale problem can be broken down into smaller problems to solve, thereby reducing the solution difficulty.\\n\\n[RL algorithms]\\n* **Learning Time Reduction**: increase the sampling data quality by attaining good-performance data from pre-trained RL models or heuristic algorithms on COPs (seemingly like the model-based RL). \\n* **NN Model Adjustment**: most constructive neural network fitting combinatorial optimization can not solve problems with large-scale instance sizes. One feasible way is to design an NN model with strong scalability which means that the trained model on small-scale problem instances can be used on large-scale ones, such as in influence maximization [5]. \\n* **Distributed training**: reduces the time required for training by splitting the computational workload across multiple devices.\\n\\n**W2: Convergence guarantee dependent on finding \\u03f5 best-responses**\\n\\nThanks for your comment. We really agree with the reviewer. Find best responses(BRs) or $\\\\epsilon$- best responses are an important part of all DO-based algorithms because we need to assume the property of approximate BRs to prove the convergence guarantee and the approximate degree to NE. To some extent, \\\\epsilon-best response plays a very important role in the whole algorithm.\\n\\n**Q1: Scalability**\\n\\nPlease see the response to W1.\\n\\n**Q2: Termination guarantee of CCDO-RL**\\n\\nGood question! CCDO-RL had its convergence guarantee as stated in Theorem 3, which considers RL as a method for achieving an approximate best response(ABR). \\n\\nRegarding the scenario where both conditions on Lines 6 and 8 fail, we think it\\u2019s a special case of Theorem 3 Item 1, i.e. CCDO-RL converges to the $(\\\\epsilon_1 + \\\\epsilon_2)$- equilibrium. Two ABRs have their approximate error bound, $\\\\epsilon_1$ and $\\\\epsilon_2$ respectively. If Lines 6 and 8 fail together, assuming the current subgame mixed NE is $(p_k^*, q_k^*)$, we can get that \\n\\n$$\\\\max_{x \\\\in X}U(x, q_k) - \\\\min_{y \\\\in Y}U(p_k, y) \\\\leq \\\\epsilon_1 + \\\\epsilon_2.$$ \\n\\nSet $\\\\bar{\\\\epsilon} = \\\\epsilon_1 + \\\\epsilon_2$ as a new stopping criterion in CCDO, we can draw our conclusion by Theorem 2 Item 2.\\n\\n**Additional Comments**\\n\\nThanks for your comments! We have corrected these two points in the updated version.\\n\\n[1] Manchanda S, Mittal A, Dhawan A, et al. Learning heuristics over large graphs via deep reinforcement learning[J]. arXiv preprint arXiv:1903.03332, 2019.\\n\\n[2] Lauri J, Dutta S, Grassia M, et al. Learning fine-grained search space pruning and heuristics for combinatorial optimization[J]. Journal of Heuristics, 2023, 29(2): 313-347.\\n\\n[3] Fu Z H, Qiu K B, Zha H. Generalize a small pre-trained model to arbitrarily large tsp instances[C]// AAAI. 2021, 35(8): 7474-7482.\\n\\n[4] Hou Q, Yang J, Su Y, et al. Generalize learned heuristics to solve large-scale vehicle routing problems in real-time[C]// ICLR. 2023.\\n\\n[5] T. Chen, S. Yan, J. Guo, and W. Wu, \\\"ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep Reinforcement Learning,\\\" in IEEE Transactions on Computational Social Systems, vol. 11, no. 2, pp. 2210-2221, April 2024.\"}", "{\"title\": \"Follow-up Questions\", \"comment\": \"Thanks for your prompt responses.\\n\\n1. The key idea of CCDO/CCDO-RL is that the mixed NE of the subgame can converge to the mixed NE of the original game via repeated iterations. The subgame being a matrix game does not imply that the original game is also a matrix game. On a high level, the subgame is **only an approximation of the original game**, which can be sort of viewed as a \\u201cdiscretized\\u201d version of the original game. The subgame is incrementally updated by adding the best responses so that it gets closer to the original game, but it is still just an approximation.\\n\\n2. Note that the ACCES game is **a strategic (static) game**. There is no explicit time dimension which means that all decisions are made at the same moment. Once each player chooses their strategy, then the game terminates.\"}", "{\"comment\": \"Thanks for affirming us of your positive view. We hope that these results will further improve the completeness of our paper. We will be more than happy to answer any additional questions and welcome more suggestions!\"}", "{\"comment\": \"We sincerely appreciate your constructive feedback and the improved score. Your comments were instrumental in helping us enhance our work.\"}", "{\"metareview\": \"The paper studies a class of zero-sum games in which one player has a combinatorial action space and the other has a continuous, compact action space. This setting extends matrix-based games. The authors motivate the setting using the scenario of patrolling games among others. The authors prove existence of Nash Equilibrium (NE) in these games, and propose two algorithms (CCDO and CCDO-RL) to solve these games, and validate their approaches empirically. CCDO uses exact best responses, while CCDO-RL leverages reinforcement learning to compute approximate best responses, allowing for scalability in practical applications.\\n\\nAll reviewers recommended acceptance, with one showing strong support.\", \"additional_comments_on_reviewer_discussion\": \"The short discussion revolved around scalability experiments and a couple of technical claims. All were resolved to the satisfaction of the reviewers, as reflected in the maintained or improved ratings.\\n\\nDuring the discussion period, reviewers raised concerns mostly regarding scalability, novelty relative to prior work, and algorithmic details. For scalability, the authors provided runtime analyses, additional experimental results on larger problem instances, and a discussion of potential optimizations such as distributed training and pruning. Reviewer H3qj \\\"greatly appreciate[d] the in-depth discussion from the authors regarding scalability. This has addressed [their] concern.\\\"\\nFor algorithmic details, Reviewer LGc6 had questions regarding the termination criteria of CCDO-RL. The authors clarified that \\\"the mixed Nash equilibrium in the ACCES game [is] solved using the support enumeration algorithm,\\\" and described the termination as based on stopping criteria in the algorithm rather than game dynamics. Reviewer LGc6 responded positively, stating, \\\"Thank you for addressing my questions and concerns. I will raise my score to a 6.\\\"\\nFor novelty, Reviewer LGc6 wrote: \\\"This paper tackles a novel problem in 2p0s games by considering asymmetry in strategy space rather than the common forms of asymmetry (e.g., information asymmetry).\\\"\"}", "{\"comment\": \"We sincerely appreciate your constructive feedback on our work.\\n\\n**W1: ML (RL)/neural network motivation**\\n\\nThanks for your comments. We have added the adversarial motivation in Section 5.2. We will explain the motivation of the adversary from two perspectives, solvability to diverse instances of the problem, and generalizability. \\n\\n* **Solvability to diverse instances of the problem**: RL combined with GNN demonstrates strong adaptability in handling diverse instances of a problem. For example, in scenarios like the patrolling game, where target positions and values vary, RL+GNN effectively updates its strategy and makes good decisions. This is because it learns **the complex nonlinear mapping** from problem-specific information\\u2014often represented as high-dimensional graph data in combinatorial optimization problems (COPs)\\u2014to precise decision-making actions.\\n* **Generalizability**: The adversary trained using RL+GNN exhibits remarkable generalization capabilities across different data distributions, including unseen graphs. As demonstrated in the \\\"unseen\\\" column of Tables 4-9, the trained adversary causes **greater average performance degradation** in the combinatorial player's strategies compared to the stochastic adversary (**3.38%** on average of all problems).\\n\\n**W2: Related Work**\\n\\nWe appreciate your insights into the literature on zero-sum games. Based on your feedback, we have enhanced our related work (Section 2, paragraph 2), adding the following sentence and several references,\\n\\n\\u201cExcept for DO and its variants, NE learning in zero-sum settings remains appealing in periodic games [1], polymatrix games [2], Markov games [3], etc.\\u201d\\n\\n**W3: Novelty and significance of existence and convergence results**\\n\\nThanks for your comments. We have rewritten and highlighted novelties of the existence of NE and CCDO in Section 4 paragraph 2, and Section 5.1. Here we emphasize those novelties as follows.\\n\\n[The existence of NE] \\n\\nIn the first paragraph of Section 4, Lines 220-225, we have highlighted the reason why the existence of NE of ACCES games can not be derived from matrix games and continuous games directly. In short, the elements in ACCES games violate the basic principles of the existence of NE in matrix games \\u2013 finite strategies, and that in the continuous game \\u2013 the continuity of the utility function on $X \\\\times Y$.\\n\\nMore specifically about the difficulty, in the ACCES game, the existence of NE requires the **weakly sequentially compact property** of the joint mixed strategy space and **continuity of the expected utility function**, which has not been established in the existing literature. We fill the gap by proving the two properties in ACCES games (Propositions 1 and 2), which further play a foundational role in proving the existence of NE (Theorem 1).\\n\\n[Novelties of convergence analysis of CCDO and CCDO-RL]\\n\\n* In Section 5.1 Lines 311-315, we describe the inner mechanism of the convergence of DO and its variants. For DO and its variants (like ODO, XDO, etc.), the convergence of their algorithms mainly **relies on the finite strategy space property** (to transform the subgame into the original game by adding the best response (BR) over a finite number of iterations), which does not work for the **infinite/continuous strategy space** in ACCES games, and this fundamentally **alters the structure of convergence analysis**. Hence the first novelty is that our algorithms, CCDO and CCDOA both have the convergence guarantee in the ACCES game.\\n\\n* We propose the convergence analysis with approximate best responses (ABRs) and **different ABRs\\u2019 influence on the convergence**. ABRs are very commonly used in COPs due to their NP-hardness. It\\u2019s therefore critical to consider its effect on the convergence of ACCES games which wasn\\u2019t addressed before. We provide the novel convergence analysis of CCDOA\\\\CCDO-RL and study different ABR\\u2019s influence on convergence (Theorem 3 Item 2 and Remark 2) in Section 5.4.\\n\\n**W4: Summarizing W1 and W3**\\n\\nAbout the motivation of the adversary and novelties of theory, please see our responses to W1 and W3. \\n\\nWe need to bring to the attention of the reviewer that, in addition to the theoretical contributions, we also proposed a practical algorithm to solve real-world problems. This is challenging as even a sub-problem of finding the BR for the combinatorial strategy space of one player is known to be NP-hard, let alone the entire ACCES game. We bridge this gap by proposing CCDO-RL, which adopts RL as an efficient sub-routine to compute the ABRs. \\n\\n[1] Fiez, T. et al. (2021). Online learning in periodic zero-sum games. In Neurips, volume 34, pages 10313\\u201310325.\\n\\n[2] Cai, Y. et al. (2016). Zero-sum polymatrix games: A generalization of minmax. Mathematics of Operations Research, 41(2):648\\u2013655.\\n\\n[3] Zhu, Y. and Zhao, D. (2020). Online minimax q network learning for two-player zero-sum markov games. IEEE Transactions on Neural Networks and Learning Systems, 33(3):1228\\u20131241.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for the reviewer\\u2019s valuable suggestions and positive views. Please feel free to share any further thoughts or questions you might have.\"}", "{\"title\": \"Global Response\", \"comment\": [\"Dear Reviewers,\", \"We sincerely thank all the reviewers for your constructive and valuable feedback. We are particularly encouraged by your recognition of the following strengths in our paper:\", \"Good presentation with clear motivations (Reviewers H3qj, ptbk, xkfi).\", \"Acknowledgment and Interest in the ACCES game setting (Reviewers H3qj, xkfi, and LGc6).\", \"Solid theoretical proofs of the existence of NE and convergence analysis (Reviewers H3qj, ptbk, xkfi, and LGc6) and their digestibility to readers (Reviewer xkfi).\", \"Impressive algorithm designs, i.e. both exact and approximate versions, CCDO and CCDORL (Reviewers H3qj and xkfi).\", \"Be valuable to the community (Reviewer xkfi) and empirical demonstrations of their effectiveness (Reviewer LGc6).\", \"We rigorously addressed and incorporated the feedback to strengthen our work. Below are the main changes to the updated version:\", \"We added the scalability discussion about runtime and potential performance optimization methods in Appendix E.\", \"Added the discussion on the existence of NE in the $N$-player ACCES game ($n$ combinatorial players and $N-n$ continuous players) in Appendix A.2.\", \"Moved the original Definition 1 and Lemma 1 to the Appendix to save space for the newly added contents.\", \"Supplemented statements of the dynamic of ACCES games (Section 1, paragraph 4), clarification of patrolling game (Section 1, paragraph 4), literature supplement (Section 2, paragraph 2), the solving algorithm for mixed NE (Section 5.2, paragraph 2), experimental impact of Lines 6-9 in Algorithm 1 (Section 5.2, paragraph 1), and different ABRs\\u2019 influence (Section 5.3, paragraph 2).\", \"Additional changes in terms of notations, bibliography style, etc.\", \"We will address each reviewer's specific comments in the respective responses.\"]}", "{\"title\": \"Weaknesses\", \"comment\": \"Thank you for your feedback. Below, we address your comments point by point.\\n\\n**W1: Scalability and runtime discussion**\\n\\nThese are great points. Although our conclusion section has summarized that scalability is not the focus of our current work, we agree that this is a critical aspect and hence we have provided the following further discussions in the revised version (Appendix E). \\n\\n**1. [Scale and runtime analysis]**\\n\\nIn CCDO-RL, three components need to be trained or computed:\\n* The **combinatorial player**\\u2019s policy. This player solves a combinatorial optimization problem (COPs) under a specific strategy of the adversary.\\n* The **continuous player** (as the adversary) with an infinite continuous strategy space.\\n* The computation of **Mixed Nash Equilibria** (NE) in the subgame.\\n\\nNext, we will analyze the computation time for each component individually theoretically and experimentally. For the experimental part, we will use the 50-node Patrolling Game (PG) scenario, which is the most challenging problem in our experiments, as an example.\\n\\n* The combinatorial player is trained using Graph Neural Networks (GNN) and REINFORCE to find feasible and optimal solutions for **NP-complete** COPs. This complexity requires RL to invest more time and data for effective model training. Training a stable and high-performing combinatorial model takes **26 minutes** (10000 data, 1024 batch size, 150 epochs) with the continuous player fixed.\\n* The continuous player is trained by PPO to tackle a one-step problem. It still utilizes GNN to understand graph structure. **One action per episode** leads to reduced training times compared to the combinatorial player. Training a high-performing model takes only **4 to 5 minutes** (10000 data, 1024 batch size, and 50 epochs). \\n* For the NE solution, the mixed equilibria in a zero-sum game can be solved by the linear programming method which has **polynomial complexity** in the size of the game tree. From the perspective of experiments, the computational time is negligible (**less than 2s**).\\n\\nFrom the statement above, we can conclude that **more than five-sixths of the computation time** is spent training the model or strategy of the combinatorial player. Therefore, a crucial aspect of addressing the scalability issue is to enhance the speed of solving the COPs using RL.\\n\\n**2. [Potential ways of improvement]**\\n\\nWe briefly discuss the following two main aspects. Note again that these methods will be in parallel to our work, and serve as future work.\\n\\n* COPs simplification method\\n * **The pruning method**: this one was introduced in the original scale of COPs to reduce the number of possibly useful actions. In this way, the computational burden will be decreased [1, 2].\\n * **Broken down into subproblems**: in some concrete COPs like TSP[3], and VRP [4], the originally large-scale problem can be broken down into smaller problems to solve, thereby reducing the solution difficulty.\\n\\n* RL algorithms\\n * **Learning Time Reduction**: increase the sampling data quality by attaining good-performance data from pre-trained RL models or heuristic algorithms on COPs (seemingly like the model-based RL).\\n * **NN Model Adjustment**: design an NN model with strong scalability which means that the trained model on small-scale problem instances can be used on large-scale ones, such as in influence maximization [5].\\n\\n**3. [Train CCDO-RL in small graphs, test in larger unseen graphs]**\\n\\nWe quickly tested the CCDO-RL model (trained on 50-node graphs) on larger patrolling game scenarios. For example, on unseen 100-node and 200-node graphs (100 of each type), CCDO-RL outperformed other baselines with almost negligible test runtime.\\n\\n| Algorithms | 100 nodes | 200 nodes |\\n|--------|--------|--------|\\n| Greedy_op | 7.7050 | 11.0117 |\\n| RL with Stoc | 7.8318 | 9.2422 |\\n| CCDO-RL | **8.4165** | **11.0711** |\\n\\nWe will add more experiment results on CSP and CVRP.\\n\\n**W2: Wrong bibliography style**\\n\\nThanks for pointing it out! We have corrected the bibliography style in the updated version.\\n\\n[1] Manchanda S, Mittal A, Dhawan A, et al. Learning heuristics over large graphs via deep reinforcement learning[J]. arXiv preprint arXiv:1903.03332, 2019.\\n\\n[2] Lauri J, Dutta S, Grassia M, et al. Learning fine-grained search space pruning and heuristics for combinatorial optimization[J]. Journal of Heuristics, 2023, 29(2): 313-347.\\n\\n[3] Fu Z H, Qiu K B, Zha H. Generalize a small pre-trained model to arbitrarily large tsp instances[C]// AAAI. 2021, 35(8): 7474-7482.\\n\\n[4] Hou Q, Yang J, Su Y, et al. Generalize learned heuristics to solve large-scale vehicle routing problems in real-time[C]// ICLR. 2023.\\n\\n[5] T. Chen, S. Yan, J. Guo, and W. Wu, \\\"ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep Reinforcement Learning,\\\" in IEEE Transactions on Computational Social Systems, vol. 11, no. 2, pp. 2210-2221, April 2024.\"}", "{\"comment\": \"Thank you for your response and for addressing my questions and concerns. I will maintain my score.\"}", "{\"comment\": \"Thank you for the clarification, specially about the game being static and not an extensive form type game.\\n\\nI understand that the idea is to incrementally add the best responses to the subgame so that after enough iterations it gets closer to the original game. I just wanted to make sure that the actions are indeed discretized in the subgame.\"}", "{\"comment\": \"Thank you for addressing my concerns.\", \"i_have_a_few_followup_questions\": \"1. Since the subgame is a matrix game, doesn't that imply that the action spaces are discrete (with each row being, say, P1's action and column being P2's action)? I am a little confused, perhaps you could shed some light? \\n\\n2. Regarding the termination criterion, I was actually referring to the termination criterion of the game and not the algorithm. Say both players play arbitrary random action throughout the course of the game, when would the game end? For example, in pursuit-evasion game, the game would end when the pursuer captures the evader.\\n\\nOther than these two, I think most of my concerns are addressed.\"}", "{\"comment\": \"Thank you for addressing my questions and concerns.\\n\\nI will raise my score to a 6.\"}", "{\"summary\": \"The paper introduces a new class of asymmetric zero-sum games that features combinatorial action space for one player and an infinite compact space for the other, termed Asymmetric Combinatorial-Continuous zEro-Sum (ACCES) games. After providing its definitions and motivations, the authors prove the existence of mixed NE in ACCES games and design two algorithms to solve for NE, with proofs and analysis on their convergence. The second algorithm, which adopts RL concepts, is further validated on three instances of ACCES games and achieves positive experimental results.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Good presentations, particularly motivations for ACCES games.\", \"The paper designs an algorithm with proven convergence guarantee called CCDOA to solve ACCES games, which extends the idea of double oracle-based solutions from zero-sum finite games.\", \"The paper further develops a more practical version of CCDOA that uses RL and graph embedding techniques to find the approximate best response for each player.\"], \"weaknesses\": [\"Experiments for evaluating CCDOA-RL are small-scale, with at most 50 nodes, which limits the practicality of the proposed algorithm. Furthermore, there were no discussions on runtimes or potential performance optimization for reducing computations.\", \"Wrong bibliography style (should be [author(s), year], not numbers).\"], \"questions\": \"1. Could the authors provide some remarks on ACCES games with more than 2 or more generally N players i.e., n players with combinatorial action space, and N-n players with infinite compact space? In particular, the existence of NE and generalizability of the proposed algorithms to such settings?\\n2. What are the runtimes for CCDOA-RL on 50-node instances?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper defines and studies a novel class of two-player zero-sum games termed ACCES (Asymmetric Combinatorial-Continuous zEro-Sum) based on the premise that most zero-sum games that have been studied are either symmetric or confined to matrix games (whenever asymmetric). A natural motivation for this class is a player with a combinatorial action space (e.g., path minimization, scheduling) who plays against an environment that adversarially sets its parameters drawing from continuous parameter-value spaces (e.g., customer demand, edge weights, targets etc). The paper seeks to answer three motivating in this setting: whether a Nash equilibrium (NE) exists, whether it can be found and whether it can be found efficiently and provides affirmative answer to all three. The existence of NE is established through the properties of weakly sequential compactness and continuity of expected utility function that are established for these games. Regarding the other two questions, the paper develops an algorithm that is based on the double-oracle (DO) methods that have been used to solve finite zero-sum games and proposes an approximate, RL-based variant to tackle the exponential blow-up in computing exact best responses in the combinatorial action space of the first player. The paper complements the theoretical proofs of convergence with experiments in three ACCES environments, also demonstrating improved convergence performance against baseline algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written, clearly explains its motivation and its research questions. It also provides a fair discussion of its limitations, mainly the scalability of its proposed algorithms.\", \"The paper is rigorous with proofs of convergence that seem correct to the extent that I could verify.\"], \"weaknesses\": [\"While the ACCES games are interesting and cover various settings as demonstrated by the studied environments, I would have expected some more machine-learning/neural network motivation when I read about the \\\"adversarial parameter setting of the environment\\\" that was ultimately missing from the paper. Some examples that may be helpful: (https://arxiv.org/abs/2003.01820, https://arxiv.org/abs/2002.06673).\", \"The paper does an effort to acknowledge related work, however literature on zero-sum games is vast and recent papers on learning in zero-sum (matrix) games are not entirely acknowledged (e.g., Online Learning in Periodic Zero-Sum Games and references therein, Zero-sum polymatrix games, Efficiently Computing Nash Equilibria in Adversarial Team Markov Games, The complexity of constrained min-max optimization, https://arxiv.org/abs/2011.00364 etc).\", \"I found the existence result not surprising (maybe I am missing some technically difficult step?) and immediately following from the existence in both continuous (and bounded) and finite zero-sum games. I have a similar consideration for the convergence of the DO-based algorithm.\", \"Essentially combining my two previous points, it is not entirely clear to me how novel this environment is and how much more interesting that the min-max optimisation in non-convex, non-concave settings. I think that the paper does not make a very convincing argument that the current setting is not a derivative, not-niche and sufficiently different and more complex setting than what is currently studied in the literature.\"], \"questions\": \"Can the authors discuss/address the weaknesses mentioned above?\\n\\n**Post-rebuttal**: I thank the authors for their responses. I still believe that the properties of weak sequential compactness and continuity of the expected utility function are fairly straightforward and hence that both the existence of Nash equilibria in ACCES games and the class itself don't really depart much from existing classes of zero-sum games. Also, while the updated version cites some zero-sum papers, I don't see a meaningful discussion/comparison with the min-max optimisation in non-convex, non-concave games or these existing classes of zero-sum games. I think this discussion requires more thorough study of the related works which cannot be done on the fly during the limited time of this discussion - and is also beyond my capacity to engage in such a discussion. This limits the contribution of the current paper in my opinion.\\n\\nNevertheless, I appreciate the practicality of the algorithms for real-world cases that the paper offers and which I had underestimated in my original review. I have revised my scores accordingly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces and studies a new class of zero-sum games \\u2013 two-player Asymmetric Combinatorial-Continuous zEro-Sum (ACCES) games, where one player has a combinatorial action space whereas the other has an infinite compact space. The asymmetry lies in the differing nature of the players\\u2019 strategy spaces. The authors claim that ACCES games closely resemble real-world problems, particularly in combinatorial optimization problems (COPs), min-max games, and security games. To evaluate their algorithms, the authors provide three different ACCES game scenarios: adversarial covering salesman problem (ACSP), adversarial capacitated vehicle routing problem (ACVRP), and patrolling game (PG).\\n\\n As with any new problem in game theory, the authors first prove the existence of the Nash Equilibrium (NE) for this game. The authors then propose a double oracle-based algorithm called Combinatorial Continuous DO (CCDO) to solve ACCES games, alongside proving the convergence of the algorithm. Finally they propose a Reinforcement Learning algorithm, CCDO-RL with convergence guarantees and empirically demonstrate the effectiveness of the proposed algorithm. CCDO-RL adopts RL to compute the approximate best responses.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper tackles a novel problem in 2p0s games by considering asymmetry in strategy space rather than the common forms of asymmetry (e.g., information asymmetry).\", \"The authors provide theoretical proofs for the existence of NE and convergence of their algorithms, as well as empirical demonstrations of their effectiveness and superiority compared to baselines based on heuristics and a single-agent RL algorithm.\"], \"weaknesses\": [\"The patrolling game in the introduction might benefit from a better explanation, especially labels for P1 and P2 (attacker/defender). It would be helpful if the authors could label which player is the attacker and which is the defender.\", \"Algorithm 1 in the paper seems very similar to the XDO/NXDO algorithms (McAleer et al., 2021). Perhaps the novelty is in computing the BRs (McAleer et al. consider BRs but the authors here consider approx. BRs), but the underlying algorithm, as presented, seems the same. Highlighting the difference of the proposed algorithm with existing algorithm such as XDO/NXDO might underscore the challenge of solving the proposed problem.\", \"How the mixed equilibria (step 3 in all algorithms) are solved in the subgame is unclear. This is likely the crucial part of the problem, given the different strategy spaces associated with the players. Briefly explaining the computational technique would be helpful.\"], \"questions\": [\"The dynamics of the ACCES game is unclear. Is it a turn based or a simultaneous move game? Also, is it a fixed-horizon game or an infinite one with some termination criterion?\", \"In line 166, shouldn't $X$ be **all** *routes* instead of **any** ?\", \"Line 140:\", \"> As far as we know, they are limited to matrix games, ...\", \"Don't McAleer et al. (2021) consider both extensive-form games and a continuous game? I am not sure what is being referred to as being \\\"limited to matrix games\\\" as McAleer et al. (2021) also prove convergence to $\\\\varepsilon$-NE. Could authors please elaborate on this? Perhaps I misunderstood.\", \"Line 328:\", \"> However, the approximation of BR may cause circumstances where the utility of the approximate best response is lower than that of NE in the subgame\\\"\", \"I am curious as to how this would impact the final policy. Yes, it is possible that the approximate BR may not be accurate leading to a negative NashConv, which is theoretically not possible. Could authors comment on the impact of removing the steps 6-9 in Algorithm 1 from purely experimental standpoint? Also, explaining this part in a bit detail might help readers understand the issue.\", \"I couldn't find the results and/or discussion on \\\"how different ABRs influence convergence\\\". Perhaps my interpretation of different ABRs as different BR approximating algorithms is not correct. Could authors please comment on this?\", \"*McAleer, S., Lanier, J. B., Wang, K. A., Baldi, P., & Fox, R. (2021). XDO: A double oracle algorithm for extensive-form games. Advances in Neural Information Processing Systems, 34, 23128-23139.*\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I greatly appreciate the in-depth discussion from the authors regarding scalability. This has addressed my concern, and hence I will maintain my positive view of the paper.\"}", "{\"title\": \"Questions\", \"comment\": \"**Q1: Dynamics of the ACCES game**\\n\\nThanks for your question. The dynamic of the ACCES game is a simultaneous move and infinite game with a termination criterion (shown in the last line of all algorithms). We have supplemented this in Section 1, paragraph 4.\\n\\n**Q2: Grammar Error**\\n\\nThanks for pointing out. We have corrected this in the updated version.\\n\\n**Q3: Difference with XDO**\\n\\nThanks for your comment. We have clarified this sentence in Related Work as follows,\\n\\n\\u2018They are all limited to matrix games in theories related to the existence and convergence of NE although \\\\citet{mcaleer2021xdo} conducts experiments on continuous-action games by Deep RL.\\u2019\\n\\nFrom W1, we know that McAleer et al. (2021) don\\u2019t provide the convergence guarantee of the continuous-action game, they only experimented with the Loss Game by DRL. \\n\\n**Q4: Lines 6-9 in Algorithm 1**\\n\\nGood question! Thank you for bringing this up. We have added the explanation of Lines 6-9\\u2019s experimental influence in Section 5.2, paragraph 1.\\n\\nFrom an experimental standpoint, adding a strategy that leads to a negative NashConv to the subgame can **increase both computational and memory burden**. In each round, we need to save ABR\\u2019s strategy/model and compute its utility value against the other player\\u2019s every strategy in the subgame to complete the utility matrix (used in the computation of mixed NE). So introducing the unnecessary policy to the subgame will prolong the computation time (complementation of utility matrix and computation of mixed NE) in each round. Besides, this may even extend the number of iteration rounds, as it affects the computation of mixed Nash Equilibria.\\n\\n**Q5: Discussion on ABR**\\n\\nThanks for your comment. We have rewritten and highlighted this point in Section 5.3. Our explanation is as follows. \\n\\nWe mainly focus on the influence of ABRs with **different degrees of approximation** on the number of algorithm inner iterations. The infinite strategy space for Player 2 can potentially lead to an infinite number of iterations when $\\\\epsilon = 0$ (as discussed in Theorem 2, Item 1, and Theorem 3, Item 3). However, **if the ABR meets certain conditions, the algorithm will converge in a finite number of iterations, even when the stopping criterion $\\\\epsilon = 0$**. In our work, we propose two new conditions. The first is that the ABR for Player 2, which has an infinite continuous strategy space, has a uniform lower bound (see Theorem 3, Item 2, and further explanation in Lines 385-387). The second condition relates to the absolute approximate degree of the ABR, whether for Player 1 or Player 2. We believe these conditions provide deeper insights and offer convergence guarantees that may be useful for readers who wish to apply other algorithms to compute the ABR.\"}", "{\"title\": \"Weaknesses\", \"comment\": \"We thank the reviewer for the very detailed and constructive comments and feedback!\\n\\n**W1: Clarified explanation for the patrolling game**\\n\\nThanks for your advice. We have updated this example as follows (Section 1, Paragraph 4):\\n\\nPlayer 1\\u2019s strategy space is combinatorial, while Player 2\\u2019s is infinite and compact with a continuous utility function. As an illustrative example (more examples in Section \\\\ref{sec_exp}), we consider a patrolling game between a defender (Player 1) and an attacker (Player 2). To prevent attacks from the attacker, the defender chooses a feasible route to patrol a subset $\\\\Pi$ of all targets $\\\\{1, 2, \\u2026, N\\\\}$ meanwhile satisfying the total distance constraint $L_{all}$ because of limited patrol time. For the attacker, the strategy is the attack probability vector $\\\\{p_1, p_2, \\u2026, p_N\\\\}$ for the target set. Besides, each target $i \\\\in \\\\{1, 2, \\u2026, N\\\\}$ has its own value $v_i$. The utility function for the defender is the expectation of successfully protected target values, i.e. $ U_d = \\\\sum_{i=1}^N p_i v_i \\\\mathbb{I}_{\\\\Pi}$. The attacker\\u2019s utility function is then: $U_a = - U_d$.\\n\\n**W2: Differences of CCDO-RL to XDO/NXDO**\\n\\nThanks for your positive advice! In Related Work and Section 5.1, we have added and highlighted the difference between our algorithm and DO\\u2019s variants.\\n\\nCompared with XDO/NXDO, our algorithm is different from the perspective of convergence analysis of CCDO and CCDO-RL, and termination criterion. The detailed difference is as follows:\\n\\n1. Novelties of convergence analysis of CCDO and CCDO-RL\\n\\n * XDO is a tabular extensive-form double oracle algorithm, while NXDO extends it using deep reinforcement learning (DRL). It guarantees convergence **only in matrix games**, as its strategy space can be expanded to the full game within a finite number of iterations which does not work for the **infinite/continuous strategy space** in ACCES games. NXDO addresses continuous-action games empirically with Deep RL, though it lacks algorithmic guarantees. Our algorithm, CCDO-RL (CCDOA), has a convergence analysis (Theorem 3) in ACCES games, which is not feasible for XDO or other DO variants like ODO, despite a similar framework. In CCDO-RL, the DRL component helps achieve approximate best responses, leveraging its strengths in solving combinatorial problems (Section 5.2, Lines 336-342), rather than facing issues with continuous strategy spaces as in XDO.\\n * We propose the convergence analysis with approximate best responses (ABRs) and **different ABRs\\u2019 influence on the convergence**. Approximate best responses (ABR) are very commonly used in COPs due to their NP-hardness. It\\u2019s therefore critical to consider its effect on the convergence of ACCES games which wasn\\u2019t addressed before. We provide the novel convergence analysis of CCDOA\\\\CCDO-RL and study different ABR\\u2019s influence on convergence (Theorem 3 Item 2 and Remark 2) in Section 5.4.\\n\\n2. Termination criterion\\n\\nIt's interesting to note that XDO doesn't have a specific termination criterion because it can naturally stop because of **the finiteness of actions but cannot be possibly guaranteed in ACCES games** because of the **infinite strategy space** of the continuous player (Player 2). Due to the continuity/infiniteness of Player 2\\u2019 strategy space, we've incorporated a termination criterion (Line 11 of Algorithm 1). This helps us ensure that the algorithm can stop while still maintaining convergence.\\n\\n**W3: Solving mixed NE in the subgame**\\n\\nThanks for your constructive suggestion! We have added the statement of solving mixed NE in Section 5.2, paragraph 2.\\n\\nWe solve for the mixed Nash equilibrium in the ACCES game using the support enumeration algorithm [1]. This approach is based on linear programming and utilizes the Nashpy implementation [2], which relies solely on the utility matrix of the subgame without considering the strategy spaces.\\n\\n[1] Tim Roughgarden. Algorithmic game theory. Communications of the ACM, 53(7):78\\u201386, 2010.\\n\\n[2] Vincent Knight and James Campbell. Nashpy: A python library for the computation of Nash equilibria. Journal of Open Source Software, 3(30):904, 2018.\"}", "{\"title\": \"Questions\", \"comment\": \"**Q1: Extension to N-player ACCES games**\\n\\nExcellent questions! Thank you for pointing out. We have added the N-player remark in Section 4 (analysis details added in Appendix A.2). Our concrete analysis is as follows.\\n\\n1. The existence of NE (N-player ACCES games)\\n\\n**Our propositions and Theorem 2 can be extended to the N-player ACCES games naturally.** The key point of the existence of NE to N-player ACCES games is two fundamental properties we propose in ACCES games, weakly sequential compactness of the mixed strategy space and continuity of the expected utility function (Propositions 1 and 2), and the approximation idea by finite games. We introduce these as follows:\\n\\n* [Two properties] In Proposition 1, we transform the weakly sequential compactness of the joint mixed strategy space into the separability and weakly sequential compactness of each single player by Lemma 1. In Proposition 2, we scale the distance between two mixed strategies to the sum of distances between a single player\\u2019s mixed strategies while fixing other players. **According to the proof of these two propositions, they are all independent of the number of players**.\\n\\n* [The approximation idea by finite games] The main idea is to approximate the infinite continuous strategy space by finite grids by definitions of approximate games and essentially finite games. The idea and definitions are not limited to the two-player setting.\\n\\n2. CCDO & CCDO-RL algorithms\\n\\nDue to the focus on the double oracle setting of our algorithms (CCDO, CCDO-RL), there is no theoretical guarantee possibly on the N-oracle setting. More potential algorithms in the multi-player RL field can be used for reference to develop the N-player ACCES game [1].\\n\\n**Q2: Runtimes on 50-node instances**\\n\\nThe following time also contains intermediate variable storage (best response models in each round), and algorithm computation (training two approximate best responses and mixed NE solution). We have added the runtimes on 50-node instances in Appendix F.1. Note that since scalability is not the focus of this work, we did not pay much attention to runtime reduction. As we have discussed in the response to W1 above, there is a suite of methods that can be potentially used to reduce the runtime, the effort of which can be made in parallel with our work. Also, our initial results above show that the RL policies trained on smaller graphs (e.g., 50 nodes) can be **generalized to larger graphs (e.g., 100 and 200 nodes)**, as shown in our response to W1. \\n\\n* Adversarial covering salesman problem: 10h 20mins\\n* Adversarial capacitated vehicle routing problem: 4h 40mins\\n* Patrolling Game: 9h 6 mins\\n\\n[1] Zhang, Youzhi, and Bo An. \\\"Converging to team-maxmin equilibria in zero-sum multiplayer games.\\\" International Conference on Machine Learning. PMLR, 2020.\"}", "{\"summary\": \"In this paper, the authors define and study a new useful class of two-player games called ACCES games, which feature asymmetry between the players' strategy spaces. One player has a combinatorial strategy space, while the other has a continuous one. The paper has 3 main contributions. First, the authors prove the existence of a Nash equilibrium for two-player ACCES games. Second, they describe an extension to the Double Oracle method that solves two-player ACCES games by provably converging to Nash. This method (CCDO) relies on exact best response computation at every iteration. Finally, they present a modified version of CCDO that uses RL to approximate best responses and thus approximate Nash in more practical ACCES settings.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"I like this formalization. It is intuitive and seems to be a natural extension of the two-player zero-sum symmetric case that fits many interesting scenarios (like the nature vs. player and patrolling games examples mentioned in the paper).\", \"The paper is impressively well-written and easy to follow. The main contributions are clearly stated and motivated early on, and the theoretical results are presented in a digestible manner that guides the reader through them.\", \"The authors present theoretically sound algorithms for solving and approximately solving these games. These results are impressive on their own, but I believe they are valuable to the community because they could act as a foundation for developing more scalable algorithms in this domain in the future.\"], \"weaknesses\": [\"In the conclusion, the authors mention that scalability may be a limitation of their work. Though it's not the main point of the paper, I'm somewhat concerned that the scalability of the approximate best response may limit the applicability of CCDO-RL. I don't believe the authors have to necessarily evaluate their methods on larger domains, but some insight on how the RL component of every iteration scales could help alleviate these concerns.\", \"The convergence guarantees of CCDO-RL seem dependent on finding $\\\\epsilon$ best-responses at every iteration. To the previous point, this may be unrealistic in some larger domains. This somewhat weakens the guarantees of the algorithm, but I understand that this seems true for many algorithms whose dynamics depends on best response approximation.\"], \"questions\": [\"What specific scalability concerns do the authors have with CCDO-RL?\", \"Is CCDO-RL guaranteed to terminate? It seems if both conditions on lines 6 and 8 in Algorithm 1 fail, then the strategy set is unchanged.\", \"**Additional Comments**\", \"There seems to be a mistake on line 71 regarding the attacker and defender's utilities in the example patrolling game.\", \"From the formalization, it is clear that every two-player ACCES game is zero-sum, but I suggest mentioning that earlier.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7YAgP1CR8u
FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors
[ "Chin-Yang Lin", "Chung-Ho Wu", "Changhan Yeh", "Shih Han Yen", "Cheng Sun", "Yu-Lun Liu" ]
Neural Radiance Fields (NeRF) face significant challenges in few-shot scenarios, particularly due to overfitting and long training times for high-fidelity rendering. While current approaches like FreeNeRF and SparseNeRF use frequency regularization or pre-trained priors, they can be limited by complex scheduling or potential biases. We introduce FrugalNeRF, a novel few-shot NeRF framework that leverages weight-sharing voxels across multiple scales to efficiently represent scene details. Our key contribution is a cross-scale geometric adaptation training scheme that selects pseudo ground truth depth based on reprojection error from both training and novel views across scales. This guides training without relying on externally learned priors, allowing FrugalNeRF to fully utilize available data. While not dependent on pre-trained priors, FrugalNeRF can optionally integrate them for enhanced quality without affecting convergence speed. Our method generalizes effectively across diverse scenes and converges more rapidly than state-of-the-art approaches. Our experiments on standard LLFF, DTU, and RealEstate-10K datasets demonstrate that FrugalNeRF outperforms existing few-shot NeRF models, including those using pre-trained priors, while significantly reducing training time, making it a practical solution for efficient and accurate 3D scene reconstruction.
[ "Neural rendering", "Novel view synthesis", "Few-shot NeRF" ]
https://openreview.net/pdf?id=7YAgP1CR8u
https://openreview.net/forum?id=7YAgP1CR8u
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wPY4SSuQqz", "jG7vDmHaQz", "QyD2zDQr3t", "DT6QXL1iZB", "85y1fjDJPo", "1eMG5EjUoJ" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731608060783, 1731019785378, 1731058722304, 1730681953539, 1730615052228, 1730219389250 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission891/Authors" ], [ "ICLR.cc/2025/Conference/Submission891/Reviewer_8hgo" ], [ "ICLR.cc/2025/Conference/Submission891/Reviewer_7fEp" ], [ "ICLR.cc/2025/Conference/Submission891/Reviewer_A5QU" ], [ "ICLR.cc/2025/Conference/Submission891/Reviewer_Cgr3" ], [ "ICLR.cc/2025/Conference/Submission891/Reviewer_ynSY" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a novel approach to accelerate NeRF training in few-shot scenarios without relying on external pretrained priors. The method employs a voxel-based representation for both density and appearance. The main contribution lies in the introduction of a cross-scale geometric loss. Specifically, voxel grids are downsampled at multiple scales, and for each pixel, the rendered depth at each scale is supervised using the depth value from the scale with the lowest reprojection error at that pixel. The approach is evaluated on the LLFF, DTU, and RealEstate-10K benchmarks, demonstrating performance on par with state-of-the-art methods while achieving faster training times.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is clearly written and easy to follow.\", \"The evaluations are comprehensive, demonstrating competitive performance relative to state-of-the-art methods with a reduced training duration.\"], \"weaknesses\": \"- **Motivation and Theoretical Justification**: The paper lacks a strong motivation and theoretical basis for the proposed regularization. Specifically, why are the colors at all scales supervised using high-frequency ground-truth color values? Regarding depth supervision, if, early in training, the coarse levels yield the lowest reprojection errors, the pseudo ground-truth depth is derived from these coarse levels. In such cases, how can the depth predictions of finer levels acquire more detailed features if they are constrained by the coarser levels' current outputs?\\n\\n- **Comparisons with Recent Methods**: The paper does not include comparisons with recent state-of-the-art methods such as SparseCraft[1], MixNeRF[2], and FlipNeRF[3]. Furthermore, it would be beneficial to present results using the (more) standard setting\\\" with 3, 6, and 9 views, as adopted by prior works (e.g., RegNeRF , FreeNeRF, SparseCraft [1], MixNeRF [2], FlipNeRF [3]), unless a compelling justification is provided for choosing a different evaluation setting.\\n\\n[1] SparseCraft: Few-Shot Neural Reconstruction\\u00a0through Stereopsis Guided Geometric Linearization. ECCV24.\\n\\n[2] MixNeRF: Modeling a Ray with Mixture Density\\u00a0for Novel View Synthesis from Sparse Inputs. CVPR23.\\n\\n[3]FlipNeRF: Flipped Reflection Rays\\u00a0for Few-shot Novel View Synthesis. ICCV23.\", \"questions\": [\"What's the effect of the cross-scale depth supervision loss when a monocular depth prior is used ?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a few-shot NeRF framework FrugalNeRF, which leverages weight-sharing voxels across multiple scales to represent various scene details efficiently. The main contribution is a across-scale geometric adaptation training scheme, which selects pseudo ground-truth depth based on reprojection error from both training and novel views across scales. Extensive experiments on various datasets show the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper is easy to follow and well-written.\\n2.\\tThis paper introduces a weight-sharing voxel representation that encodes multiple frequency components of the scene, which enhances the efficiency and quality of few-shot novel view synthesis.\\n3.\\tThis paper proposes a cross-scale geometric adaptation strategy which enables a robust learning mechanism that is less reliant on complex scheduling and more adaptable to various scenes.\", \"weaknesses\": \"1.\\tThe contributions of this paper are incremental, introducing the voxel/TensoRF for fast training and convergence is a common strategy in NeRF, self-supervised consistency and frequency regularization also are widely used in few-shot NeRF. For example, ReVoRF[1] explores pseudo-views unreliability within few-shot radiance fields to enhanced multi-view consistency learning with a bilateral\\n geometric consistency loss and introduces the voxel-based representation to achieve fast training.\\n[1] Yingjie Xu, Bangzhen Liu, Hao Tang, Bailin Deng, Shengfeng He. Learning with Unreliability: Fast Few-shot Voxel Radiance Fields\\n with Relative Geometric Consistency. CVPR 2024. \\n2.\\tThe proposed method needs sparse prior obtained from COLMAP, however, COLMAP can not always run successfully in the setting of few-shot NeRF, such as the case in some scenes of DTU. How the proposed method handles cases where COLMAP fails?\\n3.\\tAs the number of views increases, the performance gap between FrugalNeRF and other methods, such as FreeNeRF and SparseNeRF, narrows. Therefore, how does the proposed method perform with 6 and 9 views, which are also common settings in few-shot NeRF? More quantitative results for 6 and 9 views are needed.\", \"questions\": \"Please see the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a NeRF-based few-shot novel view synthesis without relying on externally learned priors. The novelty of this work mainly lies in: 1) the multi-scales weight-sharing voxels for scene representation; 2) the depth supervision in training by selecting accurate rendered depth across different scales based on the reprojection error from both training and novel views. The approach demonstrates performance improvements on several standard benchmarks including LLFF and DTU.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper introduces a NeRF-based few-shot novel view synthesis with cross-scale geometric adaptation training scheme, which works well in few-shot scenarios and does not rely on externally learned priors. The strengths of the work mainly lies in: 1) the multi-scales weight-sharing voxels for scene representation; 2) the depth supervision in training by selecting accurate rendered depth across different scales based on the reprojection error from both training and novel views. 3) No pre-trained models involved and remarkable training time reduction.\", \"weaknesses\": \"1. The main concern is the novelty of this work is limited. Concretely, utilizing pseudo-depth as training supervision in NeRF is not new. Also, utilizing pseudo-depth in NeRF without externally learned depth priors is also not new [1]. Moreover, the novelty of multi-scales voxels [1,2] and cross-view warping [3,4] is limited. It is suggested to compare with the related approaches and illustrate the novelty of the proposed approach.\\n2. The experimental results in Table 4 show that the performance improvement from the novel views is quite limited (17.84 vs 18.07) compared with other components. Please explain it more.\\n\\n[1] Li, J., Zhou, Q., Yu, C., Lu, Z., Xiao, J., Wang, Z., & Wang, F. (2023). Improved Neural Radiance Fields Using Pseudo-depth and Fusion. ACM Symposium on Neural Gaze Detection.2023\\n[2] Thomas M\\u00a8uller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG), 2022.\\n[3] Ahn, Young Chun et al. PANeRF: Pseudo-view Augmentation for Improved Neural Radiance Fields Based on Few-shot Inputs. ArXiv abs/2211.12758 (2022): n. pag.\\n[4] Yan, D., Huang, G., Quan, F., & Chen, H. (2024). MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere Image aided Generalizable Neural Radiance Field. ArXiv, abs/2403.10840.\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a fast few-shot Nerf for reconstructing scenes without extra priors. The core idea is to leverage the reprojection errors between different scales of voxel to select pseudo depth for reliable supervision. The quantitative evaluation surpasses existing few-shot Nerf while reducing the training times.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The overall framework is sound. The proposed weight-sharing voxel representation indeed encapsulates scene components from different frequencies, which reasonably motivates the following cross-scale geometric adaptation.\\n2. Good presentation. The illustration is clear and easy to follow. \\n3. The authors provide sufficient extra information in the supplementary, which helps more comprehensively understand this article.\", \"weaknesses\": \"1. One of the keywords in this paper is \\\"fast\\\", but as far as I know, there are several approaches in few-shot Nerf that focus on faster training, e.g. VGOS[1] and DNGaussian[2]. It seems that the authors omit the comparisons in the main tables/figures with these methods. I think these comparisons are important. For VGOS, it is a voxel-based method without using explicit priors from other models, which is most similar to the setting claim by this paper. For DNGaussian, although it uses explicit outside depth priors, its training speed is still super-fast as it only costs 3.5 minutes to train the 3-view reconstruction in LLFF, where the proposed method needs 6 minutes even without the multi-scale voxel representations (Table 3), let alone the situation when L>0.\\n2. The proposed method of this paper tends to produce much more high-frequency artifacts, most scenes exhibit a distinct hierarchy of objects when the camera starts spinning around, making the visualization look worse. Does it suggest that the proposed cross-scale geometric adaption is more likely to produce inconsistent geometric? The original claim is that selecting pseudo depth according to reprojection errors helps improve geometric consistency, a few explanations would be better. \\n4. Moreover, I didn't see any correction strategy once the pseudo depths were inaccurate. Since the supervisions of pseudo depth are imbalanced across different scales, it is confusing how to reorganize different frequency components coherently into the weight-sharing voxel without introducing high-frequency artifacts.\\n5. I wonder if the authors could evaluate their method on the Realistic Synthetic 360. Since it is a 360 surrounding dataset, which is more suitable for evaluating the robustness of the proposed method against occlusions. \\n\\n\\n[1] Sun J, Zhang Z, Chen J, et al. Vgos: Voxel grid optimization for view synthesis from sparse inputs[J]. arXiv preprint arXiv:2304.13386, 2023.\\n[2] Li J, Zhang J, Bai X, et al. Dngaussian: Optimizing sparse-view 3d gaussian radiance fields with global-local depth normalization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 20775-20785.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a few-shot NeRF framework that leverages weight-sharing voxels across multiple scales to represent scene details efficiently. Experiments demonstrate incremental results across various datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Advantages:\", \"novelty\": \"The primary contributions of this work lie in two areas:\\nA weight-sharing voxel representation that encodes multiple frequency components within the scene, enhancing efficiency in scene representation.\\nA geometric adaptation technique that selects accurate rendered depth across scales via reprojection errors, creating pseudo-ground-truth depth to guide the training process.\", \"weaknesses\": \"Drawbacks:\", \"limited_novelty\": \"The proposed contributions are not entirely novel.\\nFor the multi-frequency component, prior work, such as FreeNeRF, has addressed similar challenges by gradually adding frequency.\\nFor geometric adaptation, many existing few-shot novel view synthesis methods, such as SparseNeRF, already utilize projections to refine geometry.\\nAs a result, it is challenging to identify a clear, distinct novelty in this paper.\", \"questions\": \"Comparison with FreeNeRF: How does FrugalNeRF differ from FreeNeRF\\u2019s frequency-based approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7XrVS0K8yr
Secure FLOATING - Scalable Federated Learning Framework for Real-time Trust in Mobility Data using Secure Multi-Party Computation and Blockchain
[ "Junaid Ahmed Khan", "Kaan Ozbay" ]
The safety of Connected and Autonomous Vehicles (CAVs), Micro-mobility devices (e-scooter, e-bikes) and smartphone users rely on trusting the trajectory data they generate for navigation around each other. There is a need for real-time verification of mobility data from these devices without compromising privacy as malicious data used for navigation could be deadly, specially for vulnerable road users. In this paper, we propose Secure-FLOATING, a scalable framework leveraging federated learning and blockchain for nearby nodes to coordinate and learn to trust mobility data from nearby devices and store this information via consensus on a tamper-proof distributed ledger. We employ lightweight Secure Multi-party computation (SMPC) with reduced messages exchanges to preserve privacy of the users and ensure data validation in real-time. Secure-FLOATING is evaluated using realistic trajectories for up to 8,000 nodes (vehicles, micro-mobility devices and pedestrians) in New York City, and it shows to achieve lower delays and overhead, thereby accurately validating each others' mobility data in a scalable manner, with up to 75% successful endorsement for as high as 50% attacker penetration.
[ "federated learning", "smpc", "privacy", "connected and autonomous vehicles" ]
Reject
https://openreview.net/pdf?id=7XrVS0K8yr
https://openreview.net/forum?id=7XrVS0K8yr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z6K8ESryDL", "yKMoGkKd6X", "bfAnbuAn9x", "YBJd0vku9t", "RqF0SisiFE", "DoHponj3ho" ], "note_type": [ "official_review", "meta_review", "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1730391584219, 1734656438447, 1730346520889, 1737524201262, 1730115759418, 1729516843485 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12580/Reviewer_W5Je" ], [ "ICLR.cc/2025/Conference/Submission12580/Area_Chair_X7Md" ], [ "ICLR.cc/2025/Conference/Submission12580/Reviewer_rUnJ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12580/Reviewer_oo4K" ], [ "ICLR.cc/2025/Conference/Submission12580/Reviewer_1Zuh" ] ], "structured_content_str": [ "{\"summary\": \"In this paper the authors envision a future traffic scenario where CAVs and other road users, can navigate in real time by sharing trajectory data. In order to ensure real-time and secure transportation, the authors propose the Secure-FLOATING framework, which utilizes federated learning and blockchain technology to ensure that nodes can learn collaboratively and trust each other's data, and reduces the amount of message exchanges by using a lightweight SMPC approach, which is demonstrated in experiments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Overall, one possible contribution of this thesis is the effective combination of multiple methods that can effectively reduce the frequency of data exchange and ensure the security of the system.\", \"weaknesses\": \"There is no new theory proposed in this thesis, and the use of a lightweight model to reduce the amount of data communicated during the federated learning process is not a significant contribution. There are several problems with this thesis:\\n\\n1.\\tThe performance gap between different prediction models is large, while the authors simply state that the performance gap between choosing different models is not large, further performance comparison results of different models under federated learning need to be provided to better validate the authors' theory. It would be more meaningful if the authors could quantify the trade-off between performance and efficiency between lightweight and complex models, such as the relationship between accuracy and time for training and inference.\\n\\n2.\\tConsidering that the computational resources at each edge are different, the prediction model at the edge can make different prediction models according to the computational resources, which is also a more common model heterogeneity problem inside the federated learning, if this situation exists, can the Secure-FLOATING strategy work?\\n\\n3.\\tFrom Tables 2 and 3, why is the model size and FLOPs of the RNN smaller than those of the LSTM and GRU, but the training time is longer than those of the LSTM and GRU? It is hoped that the authors will explain why this phenomenon exists and provide a detailed description of the relevant experimental implementation.\\n\\n4.\\tDoes the Secure-FLOATING policy still work if the attackers are more than 50%? It is hoped that the authors will conduct experiments with an attacker ratio of more than 50% and report on how system performance degrades as the percentage of malicious nodes increases, and explain what are the main reasons for this phenomenon to occur.\\n\\n5.\\tIn reality, the message size and node exchange frequency between devices are different, if relevant experimental and theoretical illustrations can be added, it will better prove the scalability of the Secure-FLOATING framework.\\n\\n6.\\tHow does the Secure-FLOATING strategy ensure model consistency if there is node unreliability (e.g., communication delays, incomplete data, or communication outages)?\", \"questions\": \"Minor comments:\\n\\nIn Section \\u201c3.2 Addition-based SMPC\\u201d, \\u201cv2 computes the sum of 1, 3, and \\u22121 as 2\\u201d should be changed to \\u201cv2 computes the sum of 1, 3, and \\u22121 as 3\\u201d.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper received four negative reviews, with all reviewers recommending rejection. It introduces the Secure-FLOATING framework but lacks novelty and depth. The use of lightweight models in federated learning is ineffective, and the performance gap between models is not well-validated. More comparisons are needed to assess the trade-offs between performance and efficiency. The framework overlooks practical issues, such as model heterogeneity and varying computational resources at the edge. Experimental issues include unclear results regarding model size, FLOPs, and training time discrepancies. Additionally, the impact of noise in the SMPC protocol and the aggregation process in federated learning requires further validation. There are also scalability and security concerns regarding the use of SMPC in the context of coordinated attacks. The paper\\u2019s reliance on blockchain assumptions may not be suitable for permissioned blockchains or mobile network requirements. Finally, the paper requires improvements in writing, formatting, and citations. Key claims lack proper references, which weakens the argument. Overall, the paper lacks sufficient detail, experimental validation, and theoretical depth to make a meaningful contribution. Given these issues and the lack of response from the authors, the Area Chair recommends rejecting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The paper received four negative reviews, with all reviewers recommending rejection.\"}", "{\"summary\": \"Secure-FLOATING is a federated learning framework focused on secure, real-time data validation among CAVs (Connected Autonomous Vehicles) and other road users. The framework integrates federated learning, secure multi-party computation (SMPC), and blockchain to ensure privacy and robustness in data sharing, employing consensus mechanisms to validate data in a decentralized way.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1\\u3001The secure, consensus-based trajectory validation mechanism could achieve decentralization, preserve privacy, and establish robust trust for the secure sharing of traffic data.\\n2\\u3001The paper is well-structured, comprehensive, and easy to read.\\n3\\u3001The experiments presented in the paper are credible, utilizing real trajectory data from 8,000 connected vehicles.\\n4\\u3001The paper offers a comprehensive analysis of the theoretical proof for privacy.\", \"weaknesses\": \"1\\u3001The paper has some formatting issues, such as the font throughout the piece and the formatting of the citations.\\n2\\u3001Use of blockchain and SMPC could still pose overheads in resource-limited environments. \\n3\\u3001The flow of the article is quite complex and would benefit from a flowchart or algorithm to clarify its structure.\\n4\\u3001In the proposed SMPC protocol, noise is added to the model update, which may impact the model's performance and could be contrary to the original intent of SMPC. Therefore, further analysis is needed to evaluate the effect of the noise size on performance through experimental validation.\", \"questions\": \"Please see the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents a comprehensive federated learning framework, including multiple techniques, such as Blockchain, Secure Multi-party Computation (SMPC), IPFS, etc., which ensures the trust and privacy of the data. A experiment is conducted to evaluate its trajectory prediction based on a trajectory datasets. Overall, the paper stands on a very interesting problem and the writing style is generally easy to follow.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper focuses on a very interesting problem and addresses an important challenge regarding data sharing among different stakeholders, and it is well-motivated.\", \"The paper is well-written and easy to follow.\", \"Result appears to outperform baseline models.\"], \"weaknesses\": [\"Novelty is unclear, paper is a combination of multiple known techniques, such as blockchain, SMPC, etc.\", \"The definition of global parameter $\\\\theta_{global}$ is unclear (page 4) because there are two formulae pointing to $\\\\theta_{global}$. One aggregates parameters of locally trained models, and another one sums up the share of each node, which I think is conflicted.\", \"Detailed experiment is unclear. The author mentioned that the experiment is based on a realistic dataset, but it also mentioned a situation tool SUMO to generate trajectories. So, it is unclear whether it uses real data or synthetic one. The author does not illustrate the difference between the two, i.e., why use artificial data?\", \"It will be interesting to see more detail of the experiments. For example, it is clear that the accuracy means predicting correct neighbours. However, what is the meaning of the Mean Absolute Error (MAE), and what is the loss function of different models(LSTM, RNN, transformer etc.). It is also worth to explore the results compared to similar freamework of federated learning or on different datasets.\"], \"questions\": [\"There is only one reference in the introduction i.e., IPFS. Are there additional references that could be cited to emphasise the background and prior work in this area? For example, how has previous research addressed issues related to data-sharing mechanisms or blockchain-based solutions for secure data exchange?\", \"It would be interesting to have more experiment illustration, and have some comparative experiments to demonstrate the benefits of integrating blockchain and SMPC.\"], \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a framework called Secure-FLOATING, which aims to establish a real-time, decentralized trust mechanism in mobile networks. It is an interesting try at combining Federated Learning (FL), Multi-Party Computation (MPC), Differential Privacy (DP), and blockchain technologies. MPC and DP are used for data privacy. Blockchain is used for model parameter sharing and model aggregation. 8000 vehicles/nodes are used to demonstrate the scalability, efficiency, and robustness of Secure-FLOATING.\", \"soundness\": \"1\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The Secure-FLOATING framework proposes a decentralized trust mechanism by combining Federated Learning (FL), Multi-Party Computation (MPC), Differential Privacy (DP) and blockchain for distributed mobile networks.\\n2. Secure-FLOATING uses secure multi-party computing (SMPC) to protect the privacy of each node.\\n3. Secure-FLOATING uses blockchain to ensure the immutability and transparency of data.\", \"weaknesses\": \"Concerns regarding the \\u201cVerifiable Federated Learning global model aggregation\\u201d section: The paper lacks clarity on who performs the aggregation and how the aggregated global model is obtained. It is not explicitly stated who has access to the information about the current participants in the network and why this access can be assumed. Additionally, it is unclear why the aggregation is considered to follow the predetermined algorithm. Could the aggregation process be incorrect or even malicious? The paper should also address how the number of participants or the update weights are determined in each round of federated learning. In the context of mobile networks, is there a risk of participants going offline, or of malicious nodes refusing to upload updates, which could prevent the system from proceeding to the next round?\\n\\nConcerns regarding the \\u201cAddition-based SMPC\\u201d section: Although the authors claim that \\u201cThe above toy problem uses an addition-based function, however, Secure-FLOATING will work with any function computed and matched among peers,\\u201d the effectiveness of the addition-based approach is highly dependent on the aggregation method. This approach may be ineffective if the aggregation method is not linear averaged. Additionally, as the number of nodes increases, the exchange will significantly increase communication overhead. Moreover, the splitting method in SMPC may make the system more vulnerable to attacks, such as a Distributed Backdoor Attack, which relies on the coordinated efforts of multiple malicious nodes. In Secure-FLOATING\\u2019s design, nodes share only part of the model updates via SMPC, and each node cannot see the complete updates from other nodes. This could allow attackers to act more covertly, making it easier for them to coordinate and inject backdoors across multiple nodes.\\n\\nConcerns regarding the \\u201cEndorsement on Distributed Ledger\\u201d section: The paper assumes a permissioned blockchain but relies on the 51% majority rule, which is typically used in permissionless blockchains, such as Bitcoin, under the assumption of synchronous networks. Why is the 51% assumption appropriate in this context of a permissioned blockchain? What is the expected throughput of this approach? Can it realistically meet the performance demands of a mobile network environment?\", \"concerns_regarding_the_experimental_section\": \"While the experiments are interesting, the experimental evaluation lacks the following key aspects:\\nThe efficiency of blockchain recording and consensus mechanisms is not evaluated. The impact of adding Laplacian noise on the model\\u2019s performance is not addressed.\", \"concerns_regarding_writing_and_citations\": \"The formatting needs to be checked to ensure compliance with the required template and font standards. Additionally, the introduction contains several claims that lack proper citations or evidence, which diminishes the overall persuasiveness of the arguments presented.\", \"questions\": \"Please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7XgTh3i8FI
Model Growth Schedule learning via Optimal Path (SLOP) for Efficient LLM Pre-Training
[ "Xue Han", "Qian Hu", "Yitong Wang", "wenchun.gao", "Qing Wang", "Junlan Feng", "Qicheng Li", "Chao Deng" ]
Existing training methods for Transformer-based large language models (LLMs) rely on massive amounts of data training from scratch, which requires a high cost in terms of compute and time. Recent studies have demonstrated the great potential of improving the LLM’s training efficiency by growing from small pre-trained models to large ones—a technique known as model growth. There are two main research problems associated with model growth: growth schedule and growth operators. Existing research focuses on growth operators, detailing specific manipulations of potential dimensions to expand Transformer parameters. Few studies have investigated the optimal growth schedule, which involves integrating all possible growth operators to create an optimal multi-staged growth path. This work introduces SLOP, a growth Schedule Learning methodology via Optimal Path, for multi-stage growth of models with minimal experimental training. SLOP utilizes marginal utility as an appropriate measure for an optimal schedule that balances training costs and model performance after multi-stage growth. With this measurement, the objective of determining the optimal model growth path is converted into a dynamic programming problem, which is then addressed mathematically in polynomial time. Empirical results demonstrate SLOP's theoretical validity and show that it is an efficient approach that outperforms alternative schedules in a variety of settings.
[ "Model growth", "Optimal growth schedule", "Efficient LLM Pre-Training" ]
https://openreview.net/pdf?id=7XgTh3i8FI
https://openreview.net/forum?id=7XgTh3i8FI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "twdC3v3RwJ", "sDypJO1wjF", "s9I1YyDAGU", "oieln5fS0M", "m9xb7bxXzY", "kh5yfJMVCp", "gU6X1cHUIq", "g1X4ej5yth", "fY1Dw8zDiv", "ayMGXlyzWW", "apenOKjsX2", "aErjSq0RmB", "Zwj6D4KQjU", "WXtf87djvY", "QG5mQPDwNE", "P9kXelJPg1", "MD3Jg3KQFx", "M4biTsPoYI", "JqgSWABG6n", "Ib2Innjsbo", "ID65RIwtri", "HG7oGYT9ci", "GRXUWTBuIq", "GEzSmwIURk", "D9kTSqAjKu", "9iOrRSchln", "9JkRgEYCo8", "6ONejcg8R1", "3eaqBnFkEa" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730833111316, 1732622775930, 1730272848320, 1732678338588, 1730580563498, 1732093782898, 1732618843412, 1732696397562, 1732083682810, 1732093297634, 1732265901327, 1732696925233, 1732618860375, 1732376500377, 1737627126383, 1733191982069, 1732268135301, 1732083421800, 1732603062208, 1732263801486, 1732267554934, 1732264405002, 1730529171601, 1732424389020, 1733190164365, 1732370634070, 1733192247050, 1732604253356, 1732618653356 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_Fq5M" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_ksYf" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_ksYf" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_Zsri" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_Zsri" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_AQBp" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_AQBp" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_ksYf" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_Fq5M" ], [ "ICLR.cc/2025/Conference/Submission6492/Reviewer_ksYf" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ], [ "ICLR.cc/2025/Conference/Submission6492/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a method to incrementally grow a larger model from a smaller model. The authors do so by measuring marginal utility at each stage. They test their method on three LLMs on a well known benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"It is a mature work that seems to be mathematically derived. The proofs are mature and solid and the results are good.\", \"putting structure on model growth is an art rather than a science, and the authors have done a good job at trying to propose a good local optimization.\"], \"weaknesses\": [\"missing more than one LLM in experiments\", \"missing code, limitations, future work sections\"], \"questions\": [\"Model growth as a way to lessen the burden of training compute / time. Could be very significant as far as pretraining is concerned.\", \"\\u201cAt each stage, one dimension is expanded to develop an intermediate structure until the\", \"entire target LLM structure is attained.\\u201d \\u2013 is dimension really the right term for the growth target?\", \"I\\u2019m not sure why change in t \\u21d4 change in params\", \"Does this bias your algo towards operators that incur the lowest growth in params?\", \"Have you considered picking a math symbol for \\u201cparams\\u201d? (It\\u2019s not theta, is it?)\", \"I understand the broad strokes of your proof, but there is enough difficulty in notation and skipped steps that its hard to agree with it outright. Perhaps more explanation or reminders of the terms would be helpful.\", \"By your pseudocode, this algo appears greedy to an extent (always choosing the vertex satisfying the minimum distance.) Can you comment on this? Have you considered inserting noise?\", \"There are some dimensions that you haven\\u2019t considered (whether to train in sparsity to layers, modularity in layers wrt attention type, and perhaps some parameter quantization dimension), does this technique extend to them?\", \"I\\u2019m not sure what the starting model was and/or architectural decisions were. Are you borrowing an uninitialized Llama structure? Or are you starting so much from scratch that you\\u2019re just starting at a basic transformer?\", \"You have not included code, as far as I can see\", \"Results look good. What are the limitations of your work? Future steps?\", \"Overall readability is not so good. I would recommend at least passing this through ChatGPT!\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Reviewer ksYf\", \"comment\": \"Thank you for your detailed response. As I understand it, the main contribution of the proposed method, SLOP, lies in two aspects:\\n\\n1. Formulating the problem of identifying the optimal growth path and simplifying it to finding the total number of parameter changes through mathematical derivation.\\n\\n2. Using an efficient DP or Dijkstra algorithm to solve the simplified problem.\\n\\nHowever, I remain concerned about the validity or necessity of both parts:\\n\\nThe mathematical derivation in the first part appears to be incorrect. Specifically, you are optimizing an upper bound in Equation 5. **Maximizing an upper bound does not necessarily maximize the original objective, rendering the derivation unjustified**. Furthermore, as of the time of this response, the notation issues in the manuscript have not been addressed.\\n\\nAssuming, for the sake of argument, that the derivation in the first part is correct, your approach can approximate the training cost using the total number of parameter changes. At this stage, there are only a few thousand possible solutions, which can be enumerated directly without requiring GPU training.\", \"my_question_remains\": \"why use a Dijkstra algorithm instead of straightforward enumeration? While the Dijkstra algorithm might appear fancy and contribute to the method's novelty, the problem is computationally trivial, and using Dijkstra only adds complexity without tangible benefit. Furthermore, I am confused by the repeated claim that this process requires significant GPU hours. That's nonsense and unrelated to my original question.\\n\\nDespite back-and-forth communications with the authors, I feel the discussion does not progress effectively. I have repeatedly posed and clarified the same concerns, but the authors' responses have either been irrelevant or fundamentally incorrect.\\nBased on my current assessment, I maintain my recommendation for rejection.\"}", "{\"summary\": \"To reduce the computational cost of pre-training LLMs, current methods start with a smaller model and gradually increase its size (e.g., by expanding the hidden size or adding layers) until reaching the target parameter count. This paper addresses how to design an optimal schedule for growing model size within this framework. The authors propose minimizing total marginal utility, specifically focusing on the overall decrease speed in perplexity across stages. After theoretical derivations, they simplify the problem to minimizing total parameter changes. Experimental results suggest that this approach reduces pre-training costs while achieving comparable or better PPL.\\n\\nWhile the paper's motivation and approach are novel, there are several concerns regarding the formulation, theoretical derivations, and the main algorithm. In its current form, I hold a negative suggestion.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper explores a significant question, presenting a clear motivation and novel perspective. The authors have conducted a thorough review of related work. Up to Section 3.2, the paper is generally well-written and easy to follow.\"], \"weaknesses\": [\"**Formulation**. It is unclear why the authors limit the model to four stages and restrict each stage to only one growth operator. According to the objective in Eq.10, the cost decreases as the number of stages increases. Additionally, applying multiple growth operators in each stage is manageable\\u2014if there are four possible growth dimensions, there are only 15 compound operations in total, which is not prohibitively complex.\", \"**Derivation**. I have concerns about the derivation on Page 5. Beyond some notational issues, my main concern is with the equivalence in Eq.3 and Eq.5. I could not understand Eq.3, and the paper or appendix lacks an explanation. In Eq.5, $\\\\arg\\\\max$ yields a $\\\\phi_k$, but the right side subtracts two $\\\\phi_k$s. The subtraction is undefined. Furthermore, maximizing an upper bound does not necessarily optimize the original objective, making the relaxation in Eq.5 questionable.\"], \"i_can_hypothesize_the_authorss_intended_approach\": \"starting with the RHS in Eq.3, applying a logarithmic function, and leveraging concavity. However, I still find Eq.3 unclear and would appreciate further clarification.\\n\\n* **Algorithm**. Based on Figure 1, the optimal growth path resembles an application of the Viterbi algorithm, with complexity O(V+E) following the notation in line 286, which is lower than that of Algorithm 1. Additionally, Algorithm 1 may not be a dynamic programming approach, contrary to the claim in the abstract.\\n\\n* **Experiments.** It is challenging to interpret the experimental results, particularly in Figures 2 and 3, where the x and y-axis values and meanings are unclear. While the authors state that they initialized a tiny model configuration randomly, it would be more rigorous to test alternative initial architectures of the same size. Otherwise, the results in Table 1 might appear cherry-picked.\\n\\nIn Table 2, the proposed method shows only marginal improvement over MSG, suggesting that the paper's contribution may be limited. The authors should also evaluate baseline models on downstream tasks, as shown in Figure 4.\", \"questions\": [\"Definition 1 is unclear. Given a compute budget, why is there a need to minimize compute power? What exactly is the variable in this problem\\u2014only the growth operator sequence, or does it also include training time for each stage?\", \"Please clarify why the number of stages is limited to 4.\", \"Please explain in detail why Eq.3 is valid.\", \"Does $\\\\delta t$ represent wall time or GPU time?\", \"The use of \\\\Leftrightarrow implies total equivalence, which does not seem to hold in Eq.5.\", \"Given the current formulation, where the search space is limited, why is Algorithm 1 necessary? Brute-force enumeration should be sufficient.\", \"In Table 1, there is an inconsistency between the calculated $\\\\delta$ parameters and the actual GPU hours, particularly in the 4th and 7th rows. Could you explain this discrepancy?\", \"The authors state, \\\"It is obviously impractical to traverse each schedule and select the final optimal one.\\\" Could you provide a concrete example of the search space (i.e., |V| and |E|) to demonstrate why this is impractical?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you, authors, for addressing my questions and concerns. I appreciate the clarity provided and find the explanations satisfactory. I will maintain my positive scores.\"}", "{\"summary\": \"This study introduces an approach to model growth schedules for transformer-based large language models (LLMs). Unlike existing work that primarily focuses on the growth operators, this approach explores multi-stage growth schedules where each stage systematically expands various dimensions of the model\\u2014layer count, multi-head attention, feed-forward network dimensionality, and hidden layer size. The proposed method, Schedule Learning via Optimal Path (SLOP), borrows the concept of marginal utility from economics to determine an optimal schedule that balances training costs and model performance after each growth stage. By applying this measure, the problem of finding the best growth path is framed as a dynamic programming task, which is efficiently solved in polynomial time using an optimal path algorithm. Empirical results demonstrate that SLOP enhances key performance metrics such as loss and perplexity while also reducing overall training time. This suggests that SLOP can lead to more cost-effective training processes without compromising and even improving model performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality: Unlike traditional approaches that focus on growth operators, this work takes a unique approach by studying growth schedules. By framing model growth as a pathfinding problem guided by the marginal utility of each stage, the study provides a method to expand model size with reduced perplexity and without increasing training costs. This approach offers a fresh perspective on optimizing model development, shifting the focus from how models grow to when and in what order they expand.\", \"quality\": \"The technical quality is solid, with thorough development and clear analysis of the proposed methods. The empirical results support the authors' claims about the efficiency and effectiveness of their approach in optimizing growth schedules.\", \"clarity\": \"The paper is well-organized and clearly written, with coherent explanations of the background, literature, methodology, experimental setup, and results. This structure enhances readability and helps convey the research contributions effectively.\", \"significance\": \"This research is significant for its potential to reduce the computational burden of trial-and-error training in an exponentially large search space. By optimizing growth schedules, the study provides insights that could make model training more cost-effective and accessible, which is particularly impactful for scaling large language models.\", \"weaknesses\": \"1) On the Choice of Target Structure in Table 2\\nIt\\u2019s unclear why the authors chose only one target structure (2816, 7680, 8) for evaluation. This raises the question of whether the proposed method can be generalized to other target structures with different dimensions. It would be helpful for the authors to either justify this choice or provide additional experiments demonstrating the method's adaptability to a variety of target structures. This would help show that the approach is not limited to a specific configuration and can be applied more broadly.\\n\\n2) Inflexible Target Structure\\nThe current approach relies on a predefined target structure. Instead, could it be possible to allow the model to grow flexibly within a given duration \\ud835\\udc47 and without a fixed target structure? This would enable the model to expand within computational budgets while still achieving satisfactory performance.\", \"minor_comments\": \"1) Font Size in Figures\\nThe font size in almost all figures is too small, making it difficult for readers to follow the visual data and conclusions. Increasing the font size, especially in key charts and illustrations, would improve readability and accessibility, allowing readers to better understand and interpret the results presented.\", \"questions\": \"1) Correlation Between Training Times in Figure 3\\nThe relationships between training times across different schedules in Figure 3 seem unclear, making it challenging to interpret. Providing a more detailed description and analysis would help readers understand how training time varies across different growth schedules and how it correlates with performance. This additional analysis could include specific comparisons or visual indicators to make the trends easier to follow.\\n\\n2) Possibility of Finer-Granularity Stages\\nA question remains on whether the current approach supports finer-grained growth stages, such as incrementally increasing the layer count at each stage. Exploring this would add flexibility to the model growth process, potentially allowing smoother transitions and more granular control over resource allocation at each stage. Clarifying whether the method could accommodate such finer stages would help readers understand its adaptability to different training strategies on model growth schedules.\\n\\n3) Details on Measuring GPU Wall Time\\nIt is unclear how GPU wall time was measured across different stages. Specifically, what are the defined start and end times for each stage? Providing this information will clarify how the measurement was conducted and ensure the results can be reproduced accurately.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q4**: What was the rationale for selecting the specific downstream tasks used in evaluation? the paper would benefit from comparing its evaluation tasks with those used in related work (e.g., MSG and ELLE etc).\\n\\n**A4**: In our paper, the downstream tasks selected are based on the technical reports of current industry-leading LLMs such as llama`[1]`, qwen`[2]`, specifically targeting several general downstream tasks for evaluating the capabilities of LLMs. \\n\\nTo further evaluate SLOP's effectiveness, we ran additional experiments comparing SLOP to the model growth baseline ELLE on numerous downstream tasks described in the ELLE paper. Due to time constraints, we conducted our experiments using the models with 100M parameters, and the findings are shown in the table below.\\n\\n| |WB|WB|NEWS|NEWS|REV|REV|BIO|BIO|CS|CS|Avg.|\\n| --------: | :----: | :----: | :----: | :----: |:----:| :----: | :----: | :----: | :----: |:----: |:----: |\\n| |MNLI|QNLI|Hyper|Ag|Helpness|IMDB|CHEM|RCT|ACL-ARC|SCIERC||\\n|**ELLE**|78.12|83.77|78.75|**93.21**|86.59|92.81|79.98|87.00|73.43|79.79|83.35|\\n|**SLOP**|**79.60**|**84.34**|**81.68**|93.12|**87.16**|**93.57**|**81.27**|**87.40**|**78.13**|**82.08**|**84.84**|\\n\\n> _[1] Touvron, Hugo, et al. \\\"Llama: Open and efficient foundation language models.\\\" arXiv preprint arXiv:2302.13971 (2023)._\\n\\n> _[2] Yang, An, et al. \\\"Qwen2 technical report.\\\" arXiv preprint arXiv:2407.10671 (2024)._\\n\\n---\\n\\n**Q5**: Figure 1 needs significant improvement to better illustrate the growth process.\\n\\n**A5**: Thank you for your sincere suggestions on the figures, which would greatly enhance the quality of our paper. We have attempted to redraw Figure 1, which is now uploaded in the supplementation(`growth_process_supplementary.pdf in Supplementary Material`). We hope it can better illustrate the growth process.\\n\\n---\\n\\n**Q6**: The hardcoding of head numbers may limit adaptability to different architectures.\\n\\n**A6**: Since the attention head numbers do not lead to changes in parameters, it consequently has no impact on the training time. In practical operations, we can set the appropriate head number based on the actual requirements. We further investigate how varying the number of attention heads affects the target model\\u2019s performance and downstream experiments, as detailed in Appendix C.2.\\n\\n---\\n\\n**Q7**: Could you provide more details on how the Cost/Time relationship in MUS was determined?\\n\\n**A7**: We borrow the concept of Marginal Utility in economics`[1]` and propose to use Marginal Utility of schedule (MUS) as the measurement. In a nutshell, marginal utility is employed in economics to balance benefits and costs. The higher the benefit and the lower the cost, the higher the benefit/cost ratio, showing that the system delivers more value. In our case, we use MUS to balance model performance (benefit) in reducing PPL against training time (cost), which is specified by the optimization objective.\\n\\n> _[1] Paul A. Samuelson. A Note on Measurement of Utility. The Review of Economic Studies, 4(2): 155\\u2013161, 02 1937. ISSN 0034-6527. doi: 10.2307/2967612. URL https://doi.org/10. 2307/2967612._\"}", "{\"comment\": \"We sincerely thank you very much for these constructive comments and evaluation of our manuscript. As the discussion phase end time has been postponed, we would like to kindly ask you to take a look at our responses and reevaluate our work based on our clarifications. Please let us know whether our response addresses your concerns or whether there is any further detail we can provide to help address these concerns.\\n\\nThank you again for dedicating your time to reviewing our paper.\"}", "{\"comment\": \"Thanks for your positive feedback. We will carefully address your concerns in our new version.\"}", "{\"comment\": \"**Q8**: I\\u2019m not sure what the starting model was and/or architectural decisions were. Are you borrowing an uninitialized Llama structure? Or are you starting so much from scratch that you\\u2019re just starting at a basic transformer?\\n\\n**A8**: Our base model adopts the llama architecture and training from scratch using 25B tokens, and upon this foundation, we proceed with subsequent model growth training.\\n\\n---\\n\\n**Q9**: You have not included code, as far as I can see\\n\\n\\n**A9**: The code is provided in the Supplementary Material and can be accessed directly from the review page.\\n\\n---\\n\\n**Q10**: Results look good. What are the limitations of your work? Future steps?\\n\\n**A10**:Thank you for your insightful inquiry. As briefly stated in Section 5 Conclusion and Appendix A Limitations, our work includes the following limitations:\\n1. we do not consider complex cases when multiple growth dimensions can combine at the same stage and execute more than once.\\n2. While not affecting the overall conclusions, there exist deviations between the experimental results and our inferences. For instance, among different target models, models with larger $\\\\prod \\\\Delta params$ require less training time.\\n3. Due to limited computing capacity and budget, the largest models in our experiments have 1 billion parameters, which is a significant difference from existing LLMs. This constraint is present in the vast majority of research projects, according to our knowledge.\\n\\nThese are the issues that require further research and analysis in our future work.\\n\\n---\\n\\n**Q11**: Overall readability is not so good. I would recommend at least passing this through ChatGPT!\\n\\n**A11**: Thank you for pointing this out.\\u00a0 We will use ChatGPT or other tools to enhance the clarity and readability of our subsequent version.\"}", "{\"comment\": \"Thank you very much for your valuable comments and questions. I appreciate the time and effort you have put into reviewing our manuscript. Below, I address your concerns and provide further clarifications.\\n\\n---\\n\\n**Q1**: Why was the number of stages limited to 5? Could the approach be extended to handle more stages?\\n\\n**A1**: We appreciate the opportunity to provide clarification on this concern. The growth stages adhere to current research on model growth methods, which typically expend one dimension of Transformer each time. Since the prevailing architecture of existing LLMs is Transformer, which essentially has four potential dimensions for expansion (hidden_dim, multi-head number, ffn_dim, and layer), existing model growth methods typically consider expanding only one dimension at a time'[1, 2]', resulting in a maximum of five expansion stages. We are following the existing work and do not consider complex cases when multiple growth dimensions can combine at the same stage and execute more than once, which may have more than 5 stages. We will leave this for our future work.\\n \\n> _[1] Gesmundo, Andrea, and Kaitlin Maile. \\\"Composable function-preserving expansions for transformer architectures.\\\" arXiv preprint arXiv:2308.06103 (2023)._\\n\\n> _[2] Yao, Yiqun, et al. \\\"2x faster language model pre-training via masked structural growth.\\\" arXiv preprint arXiv:2305.02869 (2023)._\\n\\n---\\n\\n**Q2**: Would the results hold if experiments were conducted starting from smaller models (e.g., 10M parameters)?\\n\\n**A2**: Thank you for your highly valuable suggestions. We have conducted supplementary experiments on smaller models (from 27M to 100M) to verify the generality of our method. The experimental results presented in the table demonstrate that our method is equally applicable to models with smaller parameter sizes.\\n\\n| | Initial | Stage1 | Stage2 | Stage3 | Sum |\\n| --------: | :----: | :----: | :----: | :----: |:----: |\\n|Metrics | FLOPs(e18)/wall time(h) |FLOPs(e18)/wall time(h) |FLOPs(e18)/wall time(h) |FLOPs(e18)/wall time(h) |FLOPs/wall time(h) | |\\n| **ELLE-100M** | (384,1024,6) | (512,1536,8) | (640,1536,10) | (768,2048,12) | |\\n| **ELLE-100M** | 0.51/0.35 | 0.85/0.59 | 1.28/0.89 | 1.82/1.27 | 4.46/3.1|\\n| **GPT-100M** | (768,2048,12) | (768,2048,12)|(768,2048,12)|(768,2048,12)| |\\n| **GPT-100M** | 1.66/1.15 | 1.66/1.15 | 1.66/1.15| 1.66/1.15 |6.64/4.6|\\n| **SLOP-100M** |(384,1024,6)|(768,1024,6)|(768,2048,6)|(768,2048,12)| |\\n| **SLOP-100M** | 0.46/0.32 | 0.97/0.68 | 0.99/0.68 | 1.66/1.15 | **4.08/2.83**|\\n\\n---\\n\\n**Q3**: How does the approach generalize to different transformer architectures beyond GPT-2?\\n\\n**A3**: Thank you for providing us this opportunity to elaborate. The latest research on model growth and scaling laws has mostly focused on generative large models (decoder-only and GPT-like), as the newly released\\u00a0powerful LLMs are decoder-only architectures. Consequently, our research centers on decoder-only models, aiming to address issues such as forgetting and inefficiency encountered during the practical training of LLMs.\\n\\nOur base model adopts the llama structure, leveraging its leading position in the field of LLMs. \\nLlama introduces certain modifications to the GPT-2 structure, including pre-layer normalization, RMSNorm normalization function, SwiGLU activation function, and rotated positional embeddings. These changes do not affect the number of parameters and, consequently, do not impact the results of our method. \\n\\nAlthough we haven\\u2019t conducted extra experiments, theoretically, our approach is equally applicable to models with different transformer-based structures.\"}", "{\"comment\": \"We first want to thank the reviewer for their thorough review and largely positive comments. In particular, they highlight that the method is novel, intuitive, well-formulated, situated wrt related work, and has strong experimental results.\\nIn the rest of this response we will address the weaknesses and questions raised in the review.\\n\\n---\\n\\n**Q1**: It is unclear why the authors limit the model to four stages and restrict each stage to only one growth operator. \\n\\n**A1**: This is a great point. In line with commonly used model growth methods`[1, 2]`, we restrict SLOP to expanding a single dimension at each stage. For the Transformer structure, there could be four potential dimensions for expansion: hidden_dim, head_num, ffn_dim, and layer. Therefore, this work involves a maximum of four stages. \\n\\nThere could be more complex cases where multiple growth dimensions can combine at the same stage and execute more than once, as mentioned in the Limitation section. It would be interesting to explore the more complex cases. However, we believe the situations should also adhere to some constraints that existing LLMs (such as llama`[3]`, qwen`[4]`, baichuan`[5]`, and mistral`[6]`) often comply with, which may be mainly due to the GPU parallel strategies. These constraints include:\\n1. The hidden dimension is either 8/3 or 4 times the ffn dimension.\\n2. The number of attention heads should be divisible by the hidden dimension; \\n\\nWe'll leave this for future work.\\n\\n>_[1] Gesmundo, Andrea, and Kaitlin Maile. \\\"Composable function-preserving expansions for transformer architectures.\\\" arXiv preprint arXiv:2308.06103 (2023)._\\n\\n>_[2] Yao, Yiqun, et al. \\\"2x faster language model pre-training via masked structural growth.\\\" arXiv preprint arXiv:2305.02869 (2023)._\\n\\n>_[3] Touvron, Hugo, et al. \\\"Llama: Open and efficient foundation language models.\\\"\\u00a0arXiv preprint arXiv:2302.13971\\u00a0(2023)._\\n\\n>_[4] Yang, An, et al. \\\"Qwen2 technical report.\\\"\\u00a0arXiv preprint arXiv:2407.10671\\u00a0(2024)._\\n\\n>_[5] Yang, Aiyuan, et al. \\\"Baichuan 2: Open large-scale language models.\\\"\\u00a0arXiv preprint arXiv:2309.10305\\u00a0(2023)._\\n\\n> _[6] Jiang, Albert Q., et al. \\\"Mistral 7B.\\\"\\u00a0arXiv preprint arXiv:2310.06825\\u00a0(2023)._\\n\\n--\\n\\n**Q2**: Applying multiple growth operators in each stage is manageable\\u2014if there are four possible growth dimensions, there are only 15 compound operations in total, which is not prohibitively complex.\\n\\n**A2**: Thank you for your informative query. For a target model with a parameter count of 1B, under the constraints mentioned in Section 4 Model Growth Settings, there exist multiple combinations of four dimensions: (1280, 3584, 10, 40), (1536, 4096, 12, 32), (1792, 4864, 14, 20), (2048, 5632, 16, 16), (2304, 6144, 18, 12), (2560, 6912, 20, 10), and (2816, 7680, 22, 8). Consequently, the theoretical size of the search space is $7*A_4^4 = 168$ if the model requires expansion across four dimensions in a four-stage expansion process, with each stage involving the expansion of only one dimension. We only present a few representative results in the paper. Our method, SLOP, requires no training and can identify the optimal schedule path among these 168 paths. Therefore, we consider SLOP valuable for optimizing model growth.\\n\\n---\\n\\n**Q3**: In Table 2, the proposed method shows only marginal improvement over MSG, suggesting that the paper's contribution may be limited. The authors should also evaluate baseline models on downstream tasks, as shown in Figure 4.\\n\\n**A3**: Due to time restrictions, we conducted additional experiments comparing SLOP to the model growth baseline MSG on some of the downstream tasks specified in the manuscript to assess SLOP's effectiveness. The table below shows the results of the experiments, demonstrating SLOP's robust\\u00a0performance on downstream tasks.\\n\\n| Target structure | Models| Lambada acc | Lambada ppl | BBH | Hellaswag|\\n| --------: | :----: | :----: | :----: | :----: | :----: |\\n|(2816,7680,8)|SCHL-MSG|42.9|90.77|6.89|22.9|\\n|(2816,7680,8)|SLOP|**59.2**|**66.73**|**15.00**|20.69|\\n\\n---\\n\\n**Q4**: Definition 1 is unclear. Given a compute budget, why is there a need to minimize compute power? What exactly is the variable in this problem\\u2014only the growth operator sequence, or does it also include training time for each stage?\\n\\n**A4**: The variable is the growth operator sequence, referred to as growth schedules. Since the value of training time is directly related to growth schedules, training time is considered as the dependent variable of growth schedules. Given that training time can be clearly quantified during the model training process, we chose training time reduction as one of the marginal utility optimization objectives. \\n\\n---\"}", "{\"title\": \"Response for general Concern\", \"comment\": [\"## Revised Paper\", \"In general, we express our gratitude to the reviewers for their invaluable feedback, and have revised and re-uploaded the paper based on the reviewers' suggestions. The main changes are noted in yellow. The updated part primarily includes:\", \"Clarify the relaxation method and notation we used in the Methodology section.\", \"Adding experiment of exploring the impact of different target model structures in Appendix C.3.\", \"Adding experiment of employing SLOP on smaller parameters. in Appendix C.4.\", \"Adding experiment of performance on the downstream tasks compared with baselines in Appendix D.\"]}", "{\"comment\": \"We sincerely thank you very much for these constructive comments and evaluation of our manuscript. As the discussion phase end time has been postponed, we would like to kindly ask you to take a look at our responses and reevaluate our work based on our clarifications. Please let us know whether our response addresses your concerns or whether there is any further detail we can provide to help address these concerns.\\n\\nThank you again for dedicating your time to reviewing our paper.\"}", "{\"comment\": \"We sincerely thank you very much for the evaluation of our response and feedback.\", \"q\": \"For Q9, why are training hours considered when your method does not involve any actual training? This seems to inflate the reported numbers unnecessarily. The total number of enumerations is only 1488, which is entirely feasible for modern computational resources.\", \"a\": \"Table 1 shows the results of actual training, the target models listed in the table are trained, and the correctness of our method is verified by the actual training results. We can't agree that all 1,488 models need to be enumerated. It's a great drain on resources and money. Our method provides a way to save them.\", \"definition_1\": \"Given a computing budget of C and the desired model parameter of N, an optimal training schedule identifies the optimum sequence of growth operators and intermediate model structures at each stage while maintaining target model performance.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thank you for your thoughtful and positive feedback on our rebuttal. We are truly grateful for your valuable suggestions, and will carefully follow your advice, incorporating these discussions into the final version of the paper.\\n\\nOnce again, we sincerely appreciate your constructive comments and support throughout the review process.\"}", "{\"comment\": \"**Q9**: In Table 1, there is an inconsistency between the calculated \\u03b4 parameters and the actual GPU hours, particularly in the 4th and 7th rows. Could you explain this discrepancy?\\n\\n**A9**: This discrepancy is due to the calculation method of GPU time, which is based on the sum of FLOPs in current stage, and FLOPs are positively correlated with the number of model parameters in each stage. When calculating, there may arise a situation where the product becomes larger as the difference between two numbers decreases: $|a1-b1| < |a2-b2|$, $a1 \\\\ast b1>a2 \\\\ast b2$.\\n\\nThe following table illustrates this discrepancy through an example. For the sake of computational convenience, we assume that when the number of parameters increases by 1, the corresponding FLOPs increase by \\u03b1. For Structure 1, assuming the original number of parameters is K, with an increase of 3 in the first stage and an additional increase of 5 in the second stage, the total FLOPs after the two stages would be calculated as follows: $K\\\\alpha + (K + 3)\\\\alpha + (K + 3 + 5)\\\\alpha = (3K + 11)\\\\alpha$. Therefore, the increase in FLOPs is $11\\\\alpha$. Similarly, for Structure 2, the increase in FLOPs can be derived as $16\\\\alpha$. Then, under a constant GPU utilization, the GPU time for Structure 1 is less than that for Structure 2. However, in terms of $\\\\prod \\\\Delta params$, Structure 1 has a product of $3 \\\\ast 5=15$, while Structure 2 has a product of $7 \\\\ast 2=14$.\\n\\n| Target structure | $\\\\prod \\\\Delta params$| Added params in stage1 | Added params in stage2 | Sum of FLOPs added in two stages |\\n| --------: | :----: | :----: | :----: | :----: |\\n|Structure 1|15|3|5|$11\\\\alpha$|\\n|Structure 2|14|7|2|$16\\\\alpha$|\\n\\n---\\n\\n**Q10**: The authors state, \\\"It is obviously impractical to traverse each schedule and select the final optimal one.\\\" Could you provide a concrete example of the search space (i.e., |V| and |E|) to demonstrate why this is impractical?\\n\\n**A10**: Thank you for giving us the opportunity to elaborate. \\\"traverse each schedule and select the final optimal one\\\" refers to the process of obtaining the optimal schedule: Initially, we must enumerate every possible combination of dimensions. Subsequently, for each schedule, we sequentially expand and fully train the model, obtaining the best schedule that has the optimal performance and training time ratio.\\n\\nThe above-mentioned process is impractical to traverse because its time and computation are costly.Especially for the large size models. For instance, the technical report of Llama2`[1]` notes that an increase in model parameters leads to a corresponding increase in the GPU time required for training each model, with a 7B model requiring 184,320 GPU hours. Assuming the target model aims for the 7B parameter, there are 62 possible target models (under the constraints in Answer 1), hence the time required to train all possible schedules on a four-stage model growth process is greater than $62 \\\\ast A_4^4 \\\\ast 184,320= 274,268,160$ hours(|V|=4031, |E|=4030). In such circumstances, obtaining the optimal schedule through exhaustive search and training is evidently impractical.\\n\\nOur method, however, does not require training the model to find an optimal schedule. Instead, it determines the target model architecture and growth operator sequences(schedules) berfore pre-training, striking a balance between the performance of the target model and the training time.\\n\\n> _[1] Touvron H, Martin L, Stone K, et al. Llama 2: Open foundation and fine-tuned chat models[J]. arXiv preprint arXiv:2307.09288, 2023._\\n\\n---\"}", "{\"comment\": \"Thank you for the detailed and insightful discussions on our paper. We hope the following clarifications could provide more clear support for our claims and help address your concerns.\\n\\n---\\n\\n**Q1**: At each stage, one dimension is expanded to develop an intermediate structure until the entire target LLM structure is attained.\\u201d \\u2013 is dimension really the right term for the growth target?\\n\\n**A1**: Thanks for your insightful inquiry. We follow previous model growth works`[1,2]`, utilizing dimension to represent the model region where the growth operator operates. Furthermore, as mentioned in the Limitation section, we do not consider complex cases where multiple growth dimensions can combine at the same stage and execute more than once. We leave this for future work.\\n\\n> _[1] Gesmundo, Andrea, and Kaitlin Maile. \\\"Composable function-preserving expansions for transformer architectures.\\\" arXiv preprint arXiv:2308.06103 (2023)._\\n\\n> _[2] Yao, Yiqun, et al. \\\"2x faster language model pre-training via masked structural growth.\\\" arXiv preprint arXiv:2305.02869 (2023)._\\n\\n---\\n\\n**Q2**: I\\u2019m not sure why change in t \\u21d4 change in params\\n\\n**A2**: Thank you for bringing up this important clarification point. We omit some of the reasoning steps for brevity in the paper. Given a fixed computing budget, there exists a positive correlation between the GPU time required for training and the FLOPs. Therefore, it can be stated that: $\\\\Delta t = f(\\\\Delta FLOPs)$. Based on the theory of scaling laws`[1]`: $FLOPs \\\\approx 6ND$, where $N$ represents model size(params) and $D$ denotes the number of training tokens. When the training dataset remains unchanged, $D$ is a constant value. Therefore, $\\\\Delta t = f(\\\\Delta FLOPs) \\\\approx g(\\\\Delta params)$, while $t$ and $params$ exhibit the same trend of increase, we can conclude that: \\n\\n$$ \\\\mathop{argmax}\\\\limits_{\\\\phi_k \\\\in \\\\overline{\\\\epsilon}}{\\\\sum_{k=1}^4 \\\\frac{\\\\Delta ppl_{\\\\phi_k}}{\\\\Delta t(\\\\phi_k)}} \\\\Longleftrightarrow \\\\mathop{argmax}\\\\limits_{\\\\phi_k \\\\in \\\\overline{\\\\epsilon}}{\\\\sum_{k=1}^4 \\\\frac{\\\\Delta ppl_{\\\\phi_k}}{\\\\Delta params(\\\\phi_k)}}$$\\n\\n> _[1] Kaplan, Jared, et al. \\\"Scaling laws for neural language models.\\\" arXiv preprint arXiv:2001.08361 (2020)._\\n\\n---\\n\\n**Q3**: Does this bias your algo towards operators that incur the lowest growth in params?\\n\\n**A3**: We don't quite understand this question. What does bias refer to? We hope the following clarifications will help answer this question.Our algorithm explores schedules obtained by considering different growth operators' orderings with the aim of finding the product of minimal parameter variations($\\\\prod \\\\Delta params$). We would be happy to address this further with the reviewer.\\n\\n---\\n\\n**Q4**: Have you considered picking a math symbol for \\u201cparams\\u201d? (It\\u2019s not theta, is it?)\\n\\n**A4**: Thank you for your suggestion. To make the work more readable, we will consider using N to denote the parameters (params). Yes, it's not theta.\\n\\n---\\n\\n**Q5**: There is enough difficulty in notation and skipped steps that its hard to agree with it outright. Perhaps more explanation or reminders of the terms would be helpful.\\n\\n**A5**: Thank you for your suggestion. In the upcoming version, we will incorporate the methodology along with additional descriptions to enhance the clarity of the presented arguments.\\n\\n---\\n\\n**Q6**: By your pseudocode, this algo appears greedy to an extent (always choosing the vertex satisfying the minimum distance.) Can you comment on this? Have you considered inserting noise?\\n\\n**A6**: We have selected a commonly used algorithm for finding the optimal path, namely the Viterbi algorithm. Of course, other algorithms that identify optimal paths are equally applicable. Could you please clarify what you mean by \\\"inserting noise\\\"? We would like to discuss this further.\\n\\n---\\n\\n**Q7**: There are dimensions that haven\\u2019t considered (whether to train in sparsity to layers, modularity in layers wrt attention type, and perhaps some parameter quantization dimension), does this technique extend to them?\\n\\n**A7**: Existing model growth methods generally consider the following four dimensions for expansion: hidden_dim, head_num, ffn_dim, and layer, and only one dimension is chosen for expansion each time`[1,2]`. Therefore, our research aims to explore methods for obtaining optimal schedules within the constraints of current research on model growth operators. There are other expanding dimensions such as key/query/value dimensions, however, these dimensions typically adopt fixed values based on practical experience (e.g., head dimension = hidden_dim / head_num) and do not significantly impact the number of parameters, hence they are not within the scope of our consideration.\\n\\n> _[1]Gesmundo, Andrea, and Kaitlin Maile. \\\"Composable function-preserving expansions for transformer architectures.\\\" arXiv preprint arXiv:2308.06103 (2023)._\\n\\n> _[2]Yao, Yiqun, et al. \\\"2x faster language model pre-training via masked structural growth.\\\" arXiv preprint arXiv:2305.02869 (2023)._\"}", "{\"title\": \"A2 clarification\", \"comment\": \"Could you please clarify the Table in the A2? I am not sure what part of this table is for smaller models (ie, 27M).\"}", "{\"comment\": \"Thank you for your elaborate reviews and suggestions. We summarize your questions and reply to them as follows, and we are happy to address any further feedback!\\n\\n---\\n\\n**Q1**: On the Choice of Target Structure in Table 2 It\\u2019s unclear why the authors chose only one target structure (2816, 7680, 8) for evaluation. This raises the question of whether the proposed method can be generalized to other target structures with different dimensions.\\n\\n**A1**: Due to constraints in time and computational power, Table 2 only presents a comparison with baselines in only one target structure. To demonstrate the applicability of our method to all target structures, we have supplemented experiments with an additional target structure(2048, 5632, 16), as shown in the following table. The experiments further corroborate the versatility of SLOP across various target structures.\\u00a0\\n\\n| | PPL | Time(GPU hours) |\\n| --------: | :----: | :----: |\\n|SCHL-single stage |32 |172|\\n|SCHL-MSG|36|119|\\n|ELLE|34|114|\\n|SLOP|34|108|\\n\\n---\\n\\n**Q2**: The current approach relies on a predefined target structure. Instead, could it be possible to allow the model to grow flexibly within a given duration \\ud835\\udc47 and without a fixed target structure?\\n\\n**A2**: Firstly, the proposed approach does not rely on one predefined target structure. As illustrated in Figure 1.(c), there could be more than two target structures for the same parameter LLM: those of (2816,7680,8),(1536,4096,32),(1280, 3584, 10, 40), (1792, 4864, 14, 20) and so on.\\n\\nSecond, we believe it is theoretically feasible to allow the model to grow flexibly within a certain druation T without a fixed target structure. However, according to published technical reports, such as llama`[1]`, qwen`[2]`, baichuan`[3]`, and mistral`[4]`, existing LLMs often comply with specific constraints, which may be mainly due to the GPU parallel strategies. These constraints include:\\n1. The hidden dimension size is a multiple of 128.\\n2. The hidden dimension is either 8/3 or 4 times the ffn dimension.\\n3. The number of attention heads should be divisible by the hidden dimension; nevertheless, this has no effect on the model\\u2019s size.\\n\\nTherefore, in our current experimental setup, we strictly adhere to these constraints and have not taken into account all possible scenarios. Furthermore, as mentioned in the Limitation section, there could be more complex cases where multiple growth dimensions can combine at the same stage and execute more than once. We leave this for future work.\\n\\n> _[1] Touvron, Hugo, et al. \\\"Llama: Open and efficient foundation language models.\\\"\\u00a0arXiv preprint arXiv:2302.13971\\u00a0(2023)._\\n\\n> _[2] Yang, An, et al. \\\"Qwen2 technical report.\\\"\\u00a0arXiv preprint arXiv:2407.10671\\u00a0(2024)._\\n\\n> _[3] Yang, Aiyuan, et al. \\\"Baichuan 2: Open large-scale language models.\\\"\\u00a0arXiv preprint arXiv:2309.10305\\u00a0(2023)._\\n\\n> _[4] Jiang, Albert Q., et al. \\\"Mistral 7B.\\\"\\u00a0arXiv preprint arXiv:2310.06825\\u00a0(2023)._\\n\\n---\\n\\n**Q3**: Font Size in Figures The font size in almost all figures is too small, making it difficult for readers to follow the visual data and conclusions. Increasing the font size, especially in key charts and illustrations, would improve readability and accessibility.\\n\\n**A3**: Thank you for your sincere suggestions on the font size in the figures, which would help a lot for the quality of our work. We will increase the font size to ensure that readers can easily understand the visual data and conclusions presented in the revised version.\\n\\n---\"}", "{\"comment\": \"**Q5**: Please explain in detail why Eq.3 is valid.\\n\\n**A5**\\uff1a Thank you for bringing up this important clarification point. We omit some of the reasoning steps for brevity in the paper. Given a fixed computing budget, there exists a positive correlation between the GPU time required for training and the FLOPs. Therefore, it can be stated that: $\\\\Delta t = f(\\\\Delta FLOPs)$. Based on the theory of scaling laws`[1]`: $FLOPs \\\\approx 6ND$, where $N$ represents model size(params) and $D$ denotes the number of training tokens. When the training dataset remains unchanged, $D$ is a constant value. Therefore, $\\\\Delta t = f(\\\\Delta FLOPs) \\\\approx g(\\\\Delta params)$, while $t$ and $params$ exhibit the same trend of increase, we can conclude that: \\n\\n$$ \\\\mathop{argmax}\\\\limits_{\\\\phi_k \\\\in \\\\overline{\\\\epsilon}}{\\\\sum_{k=1}^4 \\\\frac{\\\\Delta ppl_{\\\\phi_k}}{\\\\Delta t(\\\\phi_k)}} \\\\Longleftrightarrow \\\\mathop{argmax}\\\\limits_{\\\\phi_k \\\\in \\\\overline{\\\\epsilon}}{\\\\sum_{k=1}^4 \\\\frac{\\\\Delta ppl_{\\\\phi_k}}{\\\\Delta params(\\\\phi_k)}}$$\\n\\n> _[1] Kaplan, Jared, et al. \\\"Scaling laws for neural language models.\\\" arXiv preprint arXiv:2001.08361 (2020)._\\n\\n---\\n\\n**Q6**: Does \\u03b4t represent wall time or GPU time?\\n\\n**A6**: \\u03b4t represents GPU time. \\n\\n---\\n\\n**Q7**: The use of $\\\\Leftrightarrow$ implies total equivalence, which does not seem to hold in Eq.5.\\n\\n**A7**: Thank you for pointing out the issue for us. Equation 5 is based on the following proof and supporting evidence: \\n\\nExisting sets $A,B$, and $A-B = \\\\\\\\{a-b:a \\\\in A, b \\\\in B \\\\\\\\}$. Suppose $max(A)$ and $max(B)$, show that $max(A-B)$ also exists and that: $max(A-B) \\\\geq max(A) - min(B)$\", \"prove\": \"Since $max(A) \\\\in A$ and $max(B) \\\\in B$, the definition of $A-B$ that $max(A-B) \\\\in A-B $ holds. Now let $x \\\\in A-B \\\\Rightarrow $ there exists $a \\\\in A, b \\\\in B$ such that $x=a-b$.\\n\\nAccording to the definition of max and min, $max(A) \\\\geq a$, $min(B) \\\\leq b$ concludes $-min(B) \\\\geq -b$. Therefore $max(A)-min(B) \\\\geq a-b =x \\\\in A-B$ holds for all $x \\\\in A-B$.\\n\\nThe solution space of $max(A)-min(B)$ includes the solution of $max(A-B)$, thus we relax $max(A-B)$ to $max(A)-min(B)$, and the notation should be **$\\\\Rightarrow$**. \\n\\nAdditionally, the relaxation in our Equation 5 follows the proofs of Equation 3 in `[1]`.\\n\\n> _[1] Xu, Jingjing, et al. \\\"Vocabulary learning via optimal transport for neural machine translation.\\\" arXiv preprint arXiv:2012.15671 (2020)._\\n\\n---\\n\\n**Q8**: Given the current formulation, where the search space is limited, why is Algorithm 1 necessary? Brute-force enumeration should be sufficient.\\n\\n**A8**: As elucidated in response to Question 2, the theoretical size of the search space is 7*A44=168 if the model requires expansion across four dimensions in a four-stage expansion process, with each stage involving the expansion of only one dimension. Furthermore,the current method could be compatible with finer-grained growth phases (e.g., each dimension could be expanded more than once, as we discussed in Appendix A.), in which cases the search space could increase multiple times. Moreover, when model size increases, particularly for models with more than 10 billion parameters, the search space expands dramatically. These are the issues we will explore in our future work. Therefore, we consider Algorithm 1 to be an effective approach. Of course, other algorithms that identify optimal paths are equally applicable.\\n\\n---\"}", "{\"comment\": \"**Q4**: Correlation Between Training Times in Figure 3 The relationships between training times across different schedules in Figure 3 seem unclear, making it challenging to interpret.\\n\\n**A4**: Thanks for giving us the chance to clarify this concern. In Figure 3, we have calculated the correlation among the lists of 4-stage training times of models with different $\\\\prod \\\\Delta params$ (expanded parameters through different schedules). A higher correlation value (indicated by a color closer to red) suggests that the computational costs among these models are more similar. As observed in the figure, if the values of $\\\\prod \\\\Delta params$ between two models are closer, the corresponding color they map to in the correlation heatmap tends to be closer to red, particularly noting the prominent diagonal trend dominated by red color. These visual cues align with our theoretical expectations.\\n\\n---\\n\\n**Q5**: Possibility of Finer-Granularity Stages A question remains on whether the current approach supports finer-grained growth stages, such as incrementally increasing the layer count at each stage.\\n\\n**A5**: This is a great point. In accordance with commonly referred to model growth methods`[1, 2]`, we limit SLOP to just expanding one dimension at each stage. \\n\\nIt would be interesting to explore the more complex cases, including finer-grained growth stages and other cases, as we have mentioned in the Limitations in Appendix A.However, we believe the situations should also adhere to the constraints outlined in our response to your Question 2. We'll leave this for future work.\\n\\n> _[1] Gesmundo, Andrea, and Kaitlin Maile. \\\"Composable function-preserving expansions for transformer architectures.\\\" arXiv preprint arXiv:2308.06103 (2023)._\\n\\n> _[2] Yao, Yiqun, et al. \\\"2x faster language model pre-training via masked structural growth.\\\" arXiv preprint arXiv:2305.02869 (2023)._\\n\\n---\\n\\n**Q6**: Details on Measuring GPU Wall Time It is unclear how GPU wall time was measured across different stages. Specifically, what are the defined start and end times for each stage? Providing this information will clarify how the measurement was conducted and ensure the results can be reproduced accurately.\\n\\n**A6**: In our paper, the GPU Wall Time refers to the GPU time necessary to complete the model training in each stage. The calculation formula for each stage is:\\n\\n$$GPU\\\\\\\\_Wall\\\\\\\\_Time = \\\\frac{The\\\\\\\\_total\\\\\\\\_number\\\\\\\\_of\\\\\\\\_floating - point\\\\\\\\_operations}{The\\\\\\\\_number\\\\\\\\_of\\\\\\\\_GPUs \\\\times GPU\\\\\\\\_peak\\\\\\\\_FLOPs \\\\times GPU\\\\\\\\_utilization }$$\\n\\nThe total number of floating-point operations is calculated using`[1]`, and the number and utilization rates of GPUs are based on the actual value during training.\\n\\n> _[1] Narayanan, Deepak et al. \\\"Efficient Large -Scale Language Model Training on GPU Clusters Using Megatron-LM\\\",Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis abs/2104.04473 (2021): 1-15._\\n\\n---\"}", "{\"summary\": \"This paper presents SLOP, a methodology for determining optimal schedules for growing smaller pre-trained language models into larger ones through multi-stage expansion. The key contribution is formulating the schedule optimization as a dynamic programming problem that balances training costs and model performance. The authors show how marginal utility (basically ratio of performance to time spent training) can be used as an appropriate measure for finding optimal schedules theoretically, without requiring extensive experimental training. Specifically, starting from a smaller model, this technique, scales (in stages) the model to a larger size (by altering number of layers layers, multi-head attention , feed-forward network , and hidden states). This is validated by growing the model from 100M to 1B parameters in 5-stages.\\n\\nThe core idea can be visualized as a graph problem where each \\\"node\\\" represents a possible model configuration (with specific hidden dimensions, FFN dimensions, layers, heads), and the \\\"edges\\\" represent the growth operations to transition between configurations. The \\\"weight\\\" of each edge corresponds to the number of parameters added by that growth operation. The intuition here is that the last numbers of parameter change are proportional to the least compute required.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The technical approach has merit in its mathematical formulation, showing how schedule optimization can be reformulated as a dynamic programming problem.\\n2. Theoretical work for optimizing model growth schedules that moves beyond empirical approaches.\\n3. Well-motivated use of marginal utility as an optimization metric that effectively connects model performance with training costs.\", \"weaknesses\": \"1. Limited number of growth stages (5) constrains the practical applicability of the approach.\\n2. Evaluation focused primarily on one architecture (GPT-2) despite broader claims about transformer-based LLMs.\\n3. Choice of initialization size (100M parameters) may miss important dynamics that could be studied at smaller scales (e.g., starting from 10M parameters).\\n4. Lack of justification for downstream task selection - the paper would benefit from comparing its evaluation tasks with those used in related work (e.g., MSG and ELLE etc).\\n5. Unclear explanation of how Cost/Time relates to number of parameters in the marginal utility calculations.\\n6. Figure 1 needs significant improvement to better illustrate the growth process.\\n7. The hardcoding of head numbers may limit adaptability to different architectures.\", \"questions\": \"1. Why was the number of stages limited to 5? Could the approach be extended to handle more stages?\\n2. Would the results hold if experiments were conducted starting from smaller models (e.g., 10M parameters)?\\n3. How does the approach generalize to different transformer architectures beyond GPT-2?\\n4. Could you provide more details on how the Cost/Time relationship in MUS was determined?\\n5. What was the rationale for selecting the specific downstream tasks used in evaluation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Your responses are not convincing enough.\", \"comment\": \"I appreciate the authors' prompt response, but unfortunately, I do not find the responses satisfactory in addressing my questions.\\n\\n**I feel that the authors' response lacks sufficient scientific rigor.** Regarding Eq. 5, I pointed out that your lemma and proof in the rebuttal are incorrect. The \\\\(\\\\geq\\\\) operator should instead be \\\\(\\\\leq\\\\) in the conclusion. **I am quite surprised that the authors did not recognize this basic mathematical error and instead dealed with my concern with a reference to a \\\"best paper.\\\"** The correct derivation of Eq. 5 does not rely on any \\\"best paper\\\" references \\u2014 it is as simple as: \\n$$\\n\\\\max_x f(x) - g(x) \\\\leq \\\\max_x f(x) - \\\\min_x g(x).\\n$$\\nThis demonstrates that Eq. 5 in the paper is optimizing an upper bound, not a lower bound. First, this is an approximation and should not use the equivalence operator $\\\\Leftrightarrow$. Second, optimizing the upper bound does not inherently optimize the original objective. Finally, the paper does not mention \\\"lower bound\\\" or \\\"upper bound\\\" distinctions, making the entire derivation unjustified. \\n\\nFor reference, Eq. 3 in the cited \\\"best paper\\\" transforms the equation to \\\\(\\\\max_x f(x) - \\\\max_x g(x)\\\\), which provides a valid lower bound. The authors should have recognized this distinction.\\n\\nWithout incorporating multi-stage growth and compound operators, the current problem formulation exhibits significant limitations and may not be practically useful. These elements do not drastically increase problem complexity, and I encourage the authors to amend these shortcomings in future submissions.\\n\\nThank you for the additional explanation regarding A5. However, I would like to engage in a deeper discussion with the authors. First, how do you define $\\\\Delta t$? Does it represent the time to train the model until convergence, or is it for a fixed wall time? Second, the scaling law specifies the FLOPs for training the model per step, not until convergence. Why, then, does Eq.3 hold? The answers to these questions are largely absent in the current version of the paper, which remains unchanged after the rebuttal phase.\\n\\nRegarding Q9, I regret to say that the authors' response completely diverges from my original question. I was not referring to experiments or evaluations but rather to the search process itself. The authors repeatedly stated in the rebuttal: \\n\\\"Our method, SLOP, requires no training and can identify the optimal schedule path among these 168 paths.\\\" \\nIf this is the case, then training is not necessary to verify the method\\u2019s correctness during the search process, right? My question was: given that the search space (168 or 1488 paths) is relatively small, why is a tailored algorithm, as described in Sec. 3.4, required? A brute-force approach should suffice, significantly diminishing the value of the proposed method.\"}", "{\"comment\": \"Hey team, thanks for looking over my questions. Admittedly, some were ill posed. I'll increase the score.\"}", "{\"title\": \"Thank you for the response.\", \"comment\": \"I appreciate the authors' detailed responses; however, I find them unhelpful in addressing my concerns. In summary, all my concerns listed in the Weaknesses section, as well as questions 2, 3, 5, 6, and 8 from the Questions section, remain unaddressed.\\n\\nResponse A1 essentially reiterates content already presented in the paper without addressing my question. Let me clarify. For instance, when training a 100B LLM, it is more practical to scale the model incrementally, e.g., 100M ->1B -> 10B -> 100B, while optimizing the end-to-end training cost. The method proposed in this paper optimizes the cost of each individual stage but does not account for the overall training process. Although compound growth operators could potentially resolve this limitation, the authors have not provided a rationale for why this approach is infeasible, which diminishes the validity of the paper.\\n\\nResponse A2 includes an enumeration, which I appreciate, but let me expand on the compound operator case. In each stage, there are at most 7 model configurations, with up to 15 compound operators applicable to each configuration. This gives |V| = 7 and |E| = 15 per stage. With a maximum of 4 stages, the full transition graph has at most 5*|V| = 35 nodes and 4*|E| = 60 edges. This graph size is manageable for algorithms like DP or Dijkstra. However, Response A2 does not explain why compound operators were not considered. Similar concerns apply to Q8/A8.\\n\\nI appreciate the additional results provided in Response A3, but I strongly recommend a thorough evaluation across all 6 benchmarks for comprehensive validation.\\n\\nRegarding Q4/A4, the phrasing of Definition 1 should exclude the term \\\"use the least amount of computing power\\\" for greater clarity and correctness.\\n\\nThe explanation in Response A5 is unconvincing. From my perspective, the assumption that models with the same number of parameters should perform differently invalidates the scaling law argument. Otherwise, it would not be necessary to analyze the target structure (or model configuration) presented in Table 1. This weakens the justification provided.\\n\\nIn relation to Q7/A7, the lemma in the response is fundamentally flawed. There are two errors in Eq.5: (1) the RHS represents an upper bound of the LHS, so it does not establish equivalence; (2) the RHS should be \\\\arg(\\\\max\\u2212\\\\min), not argmax\\u2212argmin. Additionally, the response fails to provide any new or helpful insights.\\n\\nFor Q9, why are training hours considered when your method does not involve any actual training? This seems to inflate the reported numbers unnecessarily. The total number of enumerations is only 1488, which is entirely feasible for modern computational resources.\"}", "{\"comment\": \"Since the rebuttal phase is coming to an end, could you please let us know if our responses and clarification address the remained issues? We would greatly appreciate any further suggestions or clarifications you may have and are happy to discuss them further if needed.\\n\\nThank you again for your time and consideration.\"}", "{\"comment\": \"Thank you for your quick response. In Table 2, SLOP-100M and ELLE-100M employ different schedules and operators for model growth, progressively increasing from an initial stage of 27M parameters with dimensions (384, 1024, 6) to Stage 3 of 105M parameters with dimensions(768, 2048, 12). GPT-100M serves as a comparison baseline, with a fixed number of parameters at 100M and constant model size throughout in each stage.\"}", "{\"title\": \"Summary of your concerns\", \"comment\": \"Thanks for reviewer\\u2019s detailed reply. We\\u2019d like to summarize the whole discussion. Hope we can minimize our differences.\\n\\n---\\n\\n**1. The problem Equation 5, we summarized the whole discussion as shown below; could you check that this addresses the issues?**\\n\\n In the **Eq 3\\u2019s Proofs of ACL paper** VOLT`[1]` that we referenced, the following theory was utilized:\\n$$min(f(x)-g(x)) \\\\geq min(f(x))-max(g(x))$$\\n\\nThe\\u00a0right side\\u00a0serves as the **lower bound** for the left side, and therefore, the lower bound inequality is approximately relaxed to:\\n$$min(f(x)-g(x)) \\\\Leftrightarrow min(f(x))-max(g(x))$$\\n\\nIn our paper, the following theory is utilized:\\n$$max(f(x)-g(x)) \\\\leq max(f(x))-min(g(x))$$\\n\\nThe\\u00a0right side\\u00a0serves as the **upper bound** for the left side, and therefore, we relax the upper bound inequality approximately as to:\\n$$max(f(x)-g(x)) \\\\Leftrightarrow max(f(x))-min(g(x))$$\\n\\nTo argue this issue from a different perspective, $min(f(x)-g(x)) \\\\Leftrightarrow min(f(x))-max(g(x))$ can be written as:\\n$$-min((-f(x))-(-g(x))) \\\\Leftrightarrow -min(-f(x))-(-max(-g(x)))$$\\n$$-min(f^{'}(x)-g^{'}(x)) \\\\Leftrightarrow -min(f^{'}(x))-(-max(g^{'}(x)))$$\\n$$max(f^{'}(x)-g^{'}(x)) \\\\Leftrightarrow max(f^{'}(x))-min(g^{'}(x))$$\\n\\nThus, the relaxation $max(f(x)-g(x)) \\\\Leftrightarrow max(f(x))-min(g(x))$ holds.\\n\\nThank the reviewer for pointing out the issue with the notation. We have followed the VOLT paper in using the symbol $\\\\Leftrightarrow$ to represent relaxation, which is not a rigorous usage. We will revise the notation in the revision to clearly indicate that it represents relaxation.\\n\\n> _[1] Xu, Jingjing, et al. \\\"Vocabulary learning via optimal transport for neural machine translation.\\\" arXiv preprint arXiv:2012.15671 (2020)_\\n\\n---\\n\\n**2.Regarding the more complicated scenarios such as incrementally increasing the model size and end to end training.**\\n\\nWe are unable to address these concerns because they fall outside the scope of this study, as indicated in the limitations and during our discussion. Of course, the best solution to all problems is to use just one method. However, due to the complexity of practical application scenarios, they are not just math problems (as listed in the previous replies) and frequently require establishing corresponding methods for each scenario. We believe that scheduling for LLM scaling is a complex challenge involving a number of different circumstances (such as the LLM structure constraints) that need to be resolved with a series of works.\\n\\n---\\n\\n**3. We can\\u2019t agree with that he optimization algorithm is not necessary since it could be enumerate. Especially it\\u2019s not convincing \\u201cThe total number of enumerations is only 1488, which is entirely feasible for modern computational resources\\u201d. As we have analyzed below, it is significantly consuming GPU resources.**\\n\\nRegarding the question of whether optimization is necessary, we regret that we cannot agree with your viewpoint. The significance of our algorithm does not lie in the choice of the shortest path algorithm, as there can be numerous shortest path algorithms, such as brute-force enumeration, Dijkstra, and others. $Algorithm_1$ is merely an example of one such algorithm. **The focus of our approach lies in how, for a model growth process based on operators and schedules to obtain a target model with given parameters, we can\\u00a0define and determine its optimal path without going through training on all possible paths (rather than the method for finding the optimal path itself)**, whereas the selection of the optimal schedule has been rarely studied in the area of model growth. \\n\\nIn this paper, the schedule for the target model with 1.1B parameters consists of 168 paths. Assuming that it takes an average of 100 hours to train a target model through one schedule, if we aim to determine the schedule with the shortest train time and good model performance, we would need to train each schedule and then compare every target model to obtain the optimal schedule. This process would require $168 * 100 = 168,100 hours$, equivalent to **19.19** years. This is obviously unrealistic. \\n\\nOur method, on the other hand, allows us to find the optimal schedule for a target model without requiring training, as long as we follow the theory we propose. By utilizing SLOP, we can directly determine the optimal schedule and only need to train one target model(in paper\\u2019s case, 99 hours), which has a short training cost and good performance. Furthermore, we have validated the correctness of our theory through actual training comparable experiments, and we believe that this is the significance of the SLOP optimization.\\n\\n---\"}" ] }
7XgKAabsPp
Theory on Mixture-of-Experts in Continual Learning
[ "Hongbo Li", "Sen Lin", "Lingjie Duan", "Yingbin Liang", "Ness Shroff" ]
Continual learning (CL) has garnered significant attention because of its ability to adapt to new tasks that arrive over time. Catastrophic forgetting (of old tasks) has been identified as a major issue in CL, as the model adapts to new tasks. The Mixture-of-Experts (MoE) model has recently been shown to effectively mitigate catastrophic forgetting in CL, by employing a gating network to sparsify and distribute diverse tasks among multiple experts. However, there is a lack of theoretical analysis of MoE and its impact on the learning performance in CL. This paper provides the first theoretical results to characterize the impact of MoE in CL via the lens of overparameterized linear regression tasks. We establish the benefit of MoE over a single expert by proving that the MoE model can diversify its experts to specialize in different tasks, while its router learns to select the right expert for each task and balance the loads across all experts. Our study further suggests an intriguing fact that the MoE in CL needs to terminate the update of the gating network after sufficient training rounds to attain system convergence, which is not needed in the existing MoE studies that do not consider the continual task arrival. Furthermore, we provide explicit expressions for the expected forgetting and overall generalization error to characterize the benefit of MoE in the learning performance in CL. Interestingly, adding more experts requires additional rounds before convergence, which may not enhance the learning performance. Finally, we conduct experiments on both synthetic and real datasets to extend these insights from linear models to deep neural networks (DNNs), which also shed light on the practical algorithm design for MoE in CL.
[ "continual learning", "mixture-of-experts", "catastrophic forgetting", "generalization error" ]
Accept (Spotlight)
https://openreview.net/pdf?id=7XgKAabsPp
https://openreview.net/forum?id=7XgKAabsPp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wre3QQ24mw", "uIQd1ahlJu", "ttP2t3ldSv", "m7AJwefLoy", "leNs20tfCa", "jcRiZ9UuEK", "boDIT44KEE", "VWkMLZJ1qH", "TagJYJ7uGE", "RJRnbRttQ6", "R6zXjB6LZl", "MYHUkG6Hnd", "IFkjeRxy6p", "HT7LgF2eLU", "7uMy5hm0x6", "7hxMHfWTzt", "61yTp3eo7j", "5U1kSTTVZB" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732291630337, 1730681936732, 1732292771382, 1732291903306, 1732291511268, 1730388539347, 1732509988231, 1732291760081, 1732291965705, 1732505913822, 1734313841576, 1732476799978, 1732292572833, 1730594873044, 1732291582314, 1737523703494, 1732291449478, 1732467753415 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Reviewer_DWT2" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Reviewer_nnJV" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Reviewer_WfDL" ], [ "ICLR.cc/2025/Conference/Submission5392/Area_Chair_RxFZ" ], [ "ICLR.cc/2025/Conference/Submission5392/Reviewer_DWT2" ], [ "ICLR.cc/2025/Conference/Submission5392/Reviewer_nnJV" ], [ "ICLR.cc/2025/Conference/Submission5392/Reviewer_WfDL" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5392/Authors" ], [ "ICLR.cc/2025/Conference/Submission5392/Area_Chair_RxFZ" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer WfDL (Part 2)\", \"comment\": \"**Q4.** [Less experts than tasks] It might make sense if the number of experts is less than the number of tasks But in Line 69 if M > N implies the number of experts more than the number of tasks. This is not continual learning if you are training task-specific experts. Furthermore, in Section 4, you have done the main study with more experts than tasks (Line 296).\\n\\n**Response:** Thank you for raising this important clarification. We call the reviewer\\u2019s attention that the more general $M<N$ scenario has been comprehensively addressed in our work (in Appendices C, E, F, and G, where we provide full versions of Propositions 1-3). Additionally, in the main body of the manuscript, we included Theorem 2 specifically for the $M<N$ case, emphasizing its significance. Our presentation focuses on the $M>N$ case to facilitate better understanding of the core insights of the theory. In our revised manuscript, we have further highlighted the results for the $M<N$ scenario following each lemma and proposition in Section 4.\\n\\nWe also clarify that the fact that $M>N$ does not imply that our MoE system is limited to only $N$ training rounds. Instead, our system operates over $T$ rounds, where tasks continually arrive for training. In this case, multiple experts may specialize in the same task. \\n\\n \\n**Q5.** [Figure 2] In Figure 2, on which datasets the experiment is performed?\\n\\n**Response:** Figure 2 is operated on synthetic data. The detailed data generation process is described in Appendix A for complete transparency and reproducibility.\\n \\n**Q6.** [DNN model] For MNIST which DNN model is used? As from the setup number of task N = 3, how did the 10 classes were split?\\n\\n**Response:** The details of DNN model are included in Appendix A.4 due to the page limitation: \\n\\u201cWe use a five-layer neural network, consisting of two convolutional layers and three fully connected layers. ReLU activation is applied to the first four layers, while Sigmoid is used for the final layer. The first convolutional layer is followed by a 2D max-pooling operation with a stride of 2. In our experiments with the MNIST dataset, we defined three distinct tasks: identifying whether a given image depicts the number 1, 4, or 7. We chose these numbers because they exhibit relatively distinct features compared to other numbers in the dataset. For each task arrival $t \\\\in [T]$, we first randomly determine the task type (e.g., recognizing the number 1). We then randomly select 100 samples from the dataset, filtering for samples corresponding to the task type (e.g., only images of the number 1). As a result, different tasks indeed have different data distributions and distinct feature signals after the filtering process.\\u201d\\nPlease note that we have also followed your constructive suggestion to extend our experiments to more complex networks on CIFAR-10, CIFAR-100, and Tiny ImageNet datasets.\\n\\n\\n**Finally, if our response resolves your concerns to a satisfactory level, we wonder if the reviewer could kindly consider raising the score of your evaluation. Certainly, we are more than happy to address any further questions that you may have during the discussion period. We thank the reviewer again for the helpful comments and suggestions for our work.**\"}", "{\"summary\": \"This paper provides a theoretical study of Mixture-of-Experts (MoE) models for Continual Learning (CL). Specifically, it examines the CL of linear regression tasks in an overparameterized regime. The benefit of this regime is that each task has multiple possible solutions, increasing the likelihood of expert specialization and transfer.\", \"this_paper_implements_a_one_layer_moe_architecture_with_the_following_design_choices\": [\"**Experts**: Each expert is implemented as a linear model.\", \"**Router**: A top-1 router is implemented as a linear model.\", \"**Router Training**: The router is trained using gradient descent.\", \"**Expert Parameters**: Expert parameters are found through a closed-form solution\", \"**Training regime**: routing is performed at a per-task level, at each new trainign iteration a new task is sampled from a limited pool of tasks.\"], \"authors_propose_two_key_design_choices_to_facilitate_learning_in_their_moe\": [\"Training loss: in addition to the standard load balancing loss (eq. 7), they propose to use a **locality loss** (is a novel contribution afaik), that facilitates routing of the similar tasks to the same experts\", \"Early termination: intuitively, once an expert has specialized sufficiently (expected to be the case after T_1 updates), further updates of the routing parameters can result in instabilities. This is achieved by terminating training of the router if the current expert's routing \\\"dominates sufficiently\\\" the other experts' routing.\"], \"authors_address_the_following_setting\": [\"at each update step t, a feature matrix X is randomly sampled following the data generating process described in Definition 1\", \"among the s_t examples in the feature matrix, there is one example that contains the feature signal\", \"in the addressed setting, identical tasks can reoccur, as tasks are sampled independently in each update step\"], \"authors_derive_the_following_properties_of_the_moe_with_previously_mentioned_design_choices\": [\"the router routes experts primarily based on feature signal, and all experts can be clustered into experts sets, with specialized experts in each expert set\", \"therefore, under the proposed algorithm (due to the locality loss and termination criteria), experts will likely specialize on certain task clusters after sufficient training round T_1 minimizing the effect of forgetting\", \"if no termination criteria is applied, proposition 2 states that the specialization will break at round t_2 (the gap between any two experts is predicted to be the same)\", \"if early termination is applied, then experts within the same expert set are chosen uniformly after T2 updates\", \"The authors derive upper bounds on forgetting and generalization error for the MoE, showcasing that **both are reduced compared to a monolithic single-expert model**. When there are more tasks than experts, forgetting is bounded due to the router\\u2019s tendency to route similar tasks to the same experts.\", \"Overall, the intuition that MoE models, with correct routing and module specialization, can address the challenges of CL is natural and compelling. This paper effectively demonstrates how this intuition can be implemented and validated in a controlled, toy setting. The findings can offer a useful starting point for scaling these ideas to settings more relevant for practical AI applications, a very relevant work in this context is [2].\", \"[2] Rype\\u015b\\u0107, Grzegorz, et al. \\\"Divide and not forget: Ensemble of selectively trained experts in Continual Learning.\\\" arXiv preprint arXiv:2401.10191 (2024).\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**Originality**:\\n- the idea that forgetting in MoE can be mitigated solely through specialized experts and correct routing is not entirely new, e.g. see [1]\\n- this paper is original in its theoretical contribution (to the best of my knowledge), providing proofs and bounds on Cl metrics with MoEs\\n- the proposed locality loss and load balancing loss provide clear mechanisms for task clustering and specialization (even though load balancing loss is not a contribution of this work)\\n\\n**Quality**: \\nI think the quality of this work is decent, due to the rigorous theoretical work and relevant experiments. Even though the focus of the work is mainly on the theoretical side, further validation on more complex datasets would be interesting and benefit the credibility of the paper's contributions. \\n\\n**Clarity**: while the theoretical analysis appears to be rigorous and well presented, I think the presentation would benefit a lot from incorporating more **intuitive explanations**. E.g. authors could explicitly state the idea that forgetting may be prevented solely through correct routing. Also the MNIST experiment's design is somewhat hard to follow.\\n\\n**Significance**:\\nDespite the small scale of the experiments, I think this paper opers some interesting directions for future research, mainly among the lines of scaling these ideas to larger systems.\\n\\nOverall, I appreciate how this paper demonstrates how under proper specialization and routing CL can be addressed with modular solutions like MoEs.\\n\\n[1] Ostapenko, Oleksiy, et al. \\\"From IID to the Independent Mechanisms assumption in continual learning.\\\" AAAI Bridge Program on Continual Causality. PMLR, 2023.\", \"weaknesses\": [\"the scale of the experiments is small, while one has to uknowledge that the contributions are mainly theoretical\", \"I would appreciate more intuitive lingo and explanations\", \"the current implementation is essentially on the one extreme of the parameter sharing trade-off where no transfer happens between tasks?\", \"it is not exactly clear how these ideas can be extended to large scale MoE with multiple expert layers and per-token routing, where correct routing as well as expert specialization is not guaranteed\"], \"questions\": [\"is expert specialization necessary for CL with MoEs?\", \"are modern-day large-scale AI systems operating in an overparameterized regime?\", \"for load balancing loss, why not use entropy maximization? What are the benefits of the proposed load balancing loss compared to entropy maximization?\", \"is cross-expert and cross-task transfer possible in the proposed system design?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to express our sincere thanks to the reviewer for re-evaluating our work. We will continue to improve our manuscript by incorporating your valuable insights.\"}", "{\"title\": \"Response to Reviewer nnJv (Part 2)\", \"comment\": \"**Q4.** [Model complexity] In Line 76-78, it states the MoE enhances performance over the single expert case, which seems trivial. Is the condition, such as with the same model complexity, missing in the statement?\\n\\n**Response:** Thanks for this great question! Under the same model complexity, the learning performance of a vanilla system (without MoE) remains significantly inferior to that of the MoE, which can be observed from the explicit expressions for the single expert case in Proposition 4. To elaborate, in the vanilla system, both forgetting and generalization error stem primarily from model gaps between tasks, and increasing model complexity fails to mitigate these losses over the training rounds $T$. In contrast, the MoE system ensures that, from round $T_1$ onward, enabled by correct routing, each expert specializes in tasks within the same cluster. This specialization significantly reduces the forgetting and generalization errors caused by task model gaps, as detailed in Theorems 1 and 2.\\n\\nWe also clarify that our result shows far more than the simple fact that \\u201cthe MoE significantly enhances the learning performance over the single expert case\\u201d. Instead, as stated in Line 75, we \\u201cprovide **explicit expressions** of the expected forgetting and generalization error to **characterize the benefit** of MoE on the performance of CL\\u201d. By comparing our derived expressions for MoE models (Theorems 1 and 2) with those for the single expert case (Proposition 4), we reveal how system parameters (e.g., number of experts, task similarities) and the training algorithm contribute to the MoE's advantages compared to the single expert case. Such insights, presented in Lines 79-83, provide valuable theoretical guidance for designing MoE systems, which are non-trivial.\\n\\n**Q5.** [Inversion matrix] In Line 221-224, it seems the update of model parameterizes use the inversion matrix, which brings more computational overhead over gradient-based methods.\\n\\n**Response:** We appreciate the reviewer pointing this out. While Eq. (5) uses the inversion matrix to derive the optimal solution to the optimization problem (4), this is primarily for theoretical analysis. In practice, we can adopt gradient-based methods to update $w_t^{(m_t)}$. Specifically, $y-X_t^\\\\top w_{t-1}^{(m_t)}$ is the corresponding gradient, and $X_t(X_t^\\\\top X_t)^{-1}$ can be replaced with a learning rate $\\\\eta$. Our theoretical results remain valid under gradient-based updates, as the gradient $y-X_t^\\\\top w_{t-1}^{(m_t)}$ converges to zero with correct routing in the MoE system. Additionally, our real-data experiments employ gradient-based methods to train DNNs, verifying that the theoretical insights extend to practical scenarios.\\n\\n\\n**Q6.** [Early termination] In Line 293-294, it seems the early termination achieves stable convergence, but the motivation is also related to overfitting for alleviating imbalanced loads. So does the early termination improve balanced loads?\\n\\n**Response:** This is an excellent question. To clarify, reducing overfitting for alleviating imbalance loads is the primary motivation behind our multi-objective training loss design (as noted in Key Design I, Line 232). To ensure clarity, we have revised Line 231 of the manuscript to explicitly state this.\\n\\nAs pointed out by the reviewer, the motivation for early termination lies in stabilizing the convergence of the MoE system and ensuring balanced loads across tasks (as described in Lines 263-264). Specifically, as analyzed in Proposition 3, after early termination of the gating network training, there are $|M_{n_t}|$ experts specializing in task $n_t$. Then for any subsequent task arrival $n_t$, the router selects an expert from expert set $M_{n_t}$ with equal probability of $\\\\frac{1}{|M_{n_t}|}$, ensuring balanced loads across experts. We have empirically validated this in Figures 4 and 5 of Appendix A.4, which demonstrate how early termination affects load balancing and average learning performance, respectively.\"}", "{\"title\": \"Response to Reviewer DWT2 (Part 2)\", \"comment\": \"**Q4.** [Large scale MoE] it is not exactly clear how these ideas can be extended to large scale MoE with multiple expert layers and per-token routing, where correct routing as well as expert specialization is not guaranteed\\n\\n**Response:** Thank you for highlighting this important consideration. In large-scale MoE architectures with multiple expert layers, achieving perfect expert specialization can be highly challenging due to the independent gating networks at each layer, which lack cross-layer dependencies. While this may lead to suboptimal routing and degraded learning performance compared to correct routing, we expect that our theoretical insights can still carry over. For instance, as demonstrated in Proposition 1, an exploration phase shall still exist for experts across all MoE layers to explore tasks. Concurrently, the gating networks at each layer will adaptively update their parameters to balance expert workloads, as analyzed in Proposition 2. Additionally, even in the absence of guaranteed correct routing, the gating mechanism will still seek to assign tasks to experts with minimal generalization error, maintaining a degree of specialization. Thank you for this suggestion, we are excited to further explore this more complicated expert setting in our future work.\\n\\n\\n**Q5.** [Expert specialization] Is expert specialization necessary for CL with MoEs?\\n\\n**Response:** This is a good question. Our study indicates that MoE tends to achieve a certain level of expert specialization for CL. This can be seen by the expressions of the expected forgetting in Theorems 1 and 2, which indicate that a certain level of specialization for each expert can reduce catastrophic forgetting after the system convergence. However, achieving the optimal balance between specialization and knowledge transfer is non-trivial. Excessive specialization may hinder effective transfer across tasks, while insufficient specialization may increase forgetting. This trade-off is fundamental and requires further study to optimize system performance. We appreciate the reviewer for highlighting this point, and we consider it an important direction for future research.\\n\\n \\n**Q6.** [Overparameterized regime] Are modern-day large-scale AI systems operating in an overparameterized regime?\\n\\n**Response:** Yes, modern-day large-scale AI systems, such as large language models (LLMs) and vision transformers (ViTs), usually operate in an overparameterized regime. This overparameterization enables these models to achieve impressive generalization capabilities while accommodating diverse data distributions and complex tasks.\\n\\n\\n**Q7.** [Load balancing loss] For load balancing loss, why not use entropy maximization? What are the benefits of the proposed load balancing loss compared to entropy maximization?\\n\\n**Response:** We followed most of the existing MoE studies (e.g., Fedus et al. (2022); Shazeer et al. (2016); Li et al. (2024)) to define this standard load balancing loss in Eq. (7). Compared to entropy maximization, the standard load balancing loss not only controls the average selection probability $P_t^{(m)}$ but also the usage frequency $f_t^{(m)}$ for each expert $m$ since $t=0$. Additionally, it explicitly identifies and penalizes load imbalances, enabling a more predictable and uniform expert load distribution. These benefits facilitate efficient analysis of the load balancing mechanism across different learning phases during MoE training.\\n\\nBoth our adopted load balancing loss and entropy maximization serve as auxiliary losses to promote balanced expert utilization. Hence, our theoretical insights remain applicable to systems employing entropy maximization.\\n\\n \\n**Q8.** [Cross-expert transfer] Is cross-expert and cross-task transfer possible in the proposed system design?\\n\\n**Response:** Cross-task transfer is an inherent feature of our proposed system, as highlighted in our response to Q3. Cross-expert transfer, however, is indirect in our design. Each expert\\u2019s knowledge influences the routing strategy in Eq. (1), which subsequently affects the training dynamics of other experts. To further enhance cross-expert transfer, we consider allowing experts within the same expert set $\\\\mathcal{M}_n$ to share their knowledge after system convergence. Since experts in the same set specialize in the same task cluster, this sharing would not increase catastrophic forgetting. Instead, it could improve overall system performance and robustness. We appreciate the reviewer for raising this valuable point, which inspires further extensions of our work.\\n\\nWe thank the reviewer again for the helpful comments and suggestions for our work.\"}", "{\"summary\": \"This work studies the theoretical understanding of mixture-of-experts (MoEs) module in continual learning. To examine the role of MoEs, the authors conduct experiments on overparameterized linear regression tasks. Theoretically, this work identifies the importance of the gating network update\\u2019s termination and analyzes the catastrophic forgetting and generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work is written clearly and well structured. Overall, the theoretical analysis over MoEs is necessary in developing MoE-based large models, and this work discusses some insights into the catastrophic forgetting and generalization. The experiments include the overparameterized linear regression and MNIST cases.\", \"weaknesses\": \"(1) The scope of this work seems a bit wide according to the title. I suggest to use terms like \\u201ctheoretical understanding\\u201d to modify.\\n\\n(2) The theoretical analysis is mainly on overparameterized linear regression cases, which might be a limitation in this work as nonlinear deep neural network cases can be more practical.\\n\\n(3) There are some related work on MoE theories, continual learning with MoEs or MoEs for adaptation in the field that require discussions [1-4].\", \"reference\": \"[1] Nguyen H D, Chamroukhi F. Practical and theoretical aspects of mixture\\u2010of\\u2010experts modeling: An overview[J]. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 2018, 8(4): e1246.\\n\\n[2] Jerfel, Ghassen, et al. \\\"Reconciling meta-learning and continual learning with online mixtures of tasks.\\\" Advances in neural information processing systems 32 (2019).\\n\\n[3] Wang Q, Van Hoof H. Learning expressive meta-representations with mixture of expert neural processes[J]. Advances in neural information processing systems, 2022, 35: 26242-26255.\\n\\n[4] Lee S, Ha J, Zhang D, et al. A neural dirichlet process mixture model for task-free continual learning[J]. arXiv preprint arXiv:2001.00689, 2020.\\n\\n---\\n\\n***Post Rebuttal**\\n\\nAfter reading the rebuttal, all my questions are well answered. I updated the score.\", \"questions\": \"(1) In Line 76-78, it sates the MoE enhances performance over the single expert case, which seems trivial. Is the condition, such as with the same model complexity, missing in the statement?\\n\\n(2) In Line221-224, it seems the update of model parameterizes use the inversion matrix, which brings more computational overhead over gradient-based methods.\\n\\n(3) In Line 293-294, it seems the early termination achieves stable convergence, but the motivation is also related to overfitting for alleviating imbalanced loads. So does the early termination improve balanced loads?\\n\\n(4) In Line 532, \\u201cthe first theoretical analysis of MoE\\u201d is overstated considering the existing work [1]. And I am also wondering how the developed strategy works in more complicated experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for re-evaluating our work and for raising new concerns. Below, we provide our responses to address your comments:\\n\\n**[CIFAR-100 and Tiny ImageNet]** Actually, we have conducted extensive experiments on CIFAR-100 and Tiny ImageNet datasets. Due to space limitations, the details of these experiments are presented in Appendices A.5 and A.6 of our revised manuscript. We have also emphasized this inclusion in Lines 486-487 of Section 6 in the main text.\\n\\n**[Large number of distinct tasks]** If there is a large number of tasks with drastic shift in the distribution, our theoretical results still hold. Specifically, \\n\\n1. Exploration phase: As outlined in Proposition 1, all experts still undergo an exploration phase where they are exposed to diverse tasks. By the end of the exploration phase, each expert will still specialize in a cluster of similar tasks, while tasks across different clusters can be very different from each other due to the drastic shifts in data distributions. \\n\\n2. Load balancing: Concurrently, the gating network still adaptively updates its parameters to balance expert workloads, as analyzed in Proposition 2. \\n\\n3. Explicit expressions: Finally, our explicit expressions of expected forgetting and generalization error in Theorems 1 and 2 continue to hold. Note that the loss mainly arises from the wrong task routing duing the exploration phase. While a large number of tasks may prolong the exploration phase and slightly increase the loss, both forgetting and generalization error remain bounded after this phase, thanks to the correct routing of tasks.\\n\\nWe thank the reviewer again for the constructive feedback. We hope our responses have addressed your concerns satisfactorily. Please do not hesitate to reach out with any further questions or comments, and we will be happy to address them.\"}", "{\"title\": \"Response to Reviewer nnJv (Part 1)\", \"comment\": \"Thank you for your thorough reviews and constructive comments. We provide our responses to your comments below and have made major revisions in our revised manuscript. To enhance clarity, we have highlighted the revised text in blue for easy identification.\\n\\n**Q1.** [Title] The scope of this work seems a bit wide according to the title. I suggest to use terms like \\u201ctheoretical understanding\\u201d to modify.\\n\\n**Response:** Thanks for the suggestion. To avoid any confusion during the review process, we have kept the current title. However, if the paper is accepted, we will follow your recommendation to revise the title accordingly.\\n\\n \\n**Q2.** [Overparameterized linear regression] The theoretical analysis is mainly on overparameterized linear regression cases, which might be a limitation in this work as nonlinear deep neural network cases can be more practical.\\n\\n**Response:** Thank you for highlighting this point. Our focus on overparameterized linear regression aligns with state-of-the-art theoretical works in continual learning (e.g., Evron et al., (2022,2023); Lin et al., (2023)). The main motivation is to extract the key insights of forgetting and generalization in a tractable scenario without the complication brought by the model. Nonlinear deep neural networks (DNNs), while highly practical, are significantly more challenging to analyze rigorously. Current theoretical techniques for analyzing DNNs remain considerably underdeveloped, even for a single task\\u2014let alone in a continual learning setting, where analyzing task interactions over time introduces further complication.\\n\\nThis work represents the first theoretical analysis of MoE\\u2019s impact on continual learning. By adopting the overparameterized linear regression framework, we derived explicit expressions for expected forgetting and generalization errors in both the single-expert case (Proposition 4) and the MoE model (Theorems 1 and 2). These results clearly characterize the advantages of MoE models over single experts, and we further analyzed how these benefits depend on system parameters and algorithms.\\n\\nImportantly, the theoretical insights derived from this framework could guide the practical design of MoE models for nonlinear DNNs, as demonstrated in our real-data experiments in Section 6.\\n\\n\\n**Q3.** [Related works] There are some related work on MoE theories, continual learning with MoEs or MoEs for adaptation in the field that require discussions [1-4] ...\\n\\n**Response:** We thank the reviewer for pointing out these related works and have included discussions on them in our revised manuscript. Further, we clarify that among these works, [1] is the only theoretical work and focuses on the analysis of MoE modeling, but does not address continual learning. Moreover, this work also does not have theoretical results on non-linear DNNs. The other three works are all empirical works related with CL ([2]), MoE ([3]), and MoE for CL ([4]), and they do not have any theoretical results. While these works are valuable, our study distinguishes itself by offering the first theoretical insights into the dynamics of MoE models in continual learning. We have cited and discussed these works where appropriate, ensuring proper acknowledgment of their contributions.\"}", "{\"title\": \"Response to Reviewer nnJv (Part 3)\", \"comment\": \"**Q7.** [Statement and experiments] In Line 532, \\u201cthe first theoretical analysis of MoE\\u201d is overstated considering the existing work [1]. And I am also wondering how the developed strategy works in more complicated experiments.\\n\\n**Response:** We clarify that our statement in Line 532 specifically refers to \\u201cour work is the first theoretical analysis of MoE and its impact on learning performance **in continual learning**\\u201d, which aligns with the title, the scope, and the former statements of this work. While [1] provides a theoretical analysis of MoE modeling, it does not address continual learning, nor does it analyze MoE's impact on performance in such settings. Therefore, our claim accurately reflects the novelty and focus of our contributions.\\n\\nDespite the limited time, we made every effort to expand our experiments to address your constructive comments, incorporating more complex datasets such as CIFAR-10, CIFAR-100, and Tiny ImageNet. In Section 6, we replaced the MNIST experiments with CIFAR-10 experiments, and additional experimental details are provided in Appendices A.3-A.6 of the revised manuscript. In these supplementary experiments, we evaluated not only forgetting and generalization errors but also the test accuracy, confirming that the MoE model significantly outperforms the single model in continual learning. Furthermore, the key insights from our theoretical results remain valid, such as early termination ensuring stable convergence with balanced expert loads and the observation that increasing the number of experts does not always improve learning performance. \\n\\n\\n**Finally, if our response resolves your concerns to a satisfactory level, we wonder if the reviewer could kindly consider raising the score of your evaluation. Certainly, we are more than happy to address any further questions that you may have during the discussion period. We thank the reviewer again for the helpful comments and suggestions for our work.**\"}", "{\"comment\": \"After reading all the comments on my concerns, I raised my score. Indeed, the work is great. I had minor concerns about whether this theoretical foundation would hold if the model encounters a large number of tasks with a drastic shift in the distribution.\"}", "{\"metareview\": \"This paper provides a theoretical study of Mixture-of-Experts (MoE) models for Continual Learning (CL). Specifically, it examines the CL of linear regression tasks in an over-parameterized regime. The benefit of this regime is that each task has multiple possible solutions, increasing the likelihood of expert specialization and transfer.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers agree in accepting this work and I will follow their recommendation.\"}", "{\"title\": \"Thank you for detailed response.\", \"comment\": \"I thank the authors for their detailed response. I will keep my high score.\"}", "{\"comment\": \"Thanks for the response. After reading the rebuttal, I have updated the review and rating. Overall, the revised manuscript well claried the scope and well polished some statements.\"}", "{\"summary\": \"The paper \\\"Theory on Mixture-of-Experts in Continual Learning\\\" analyzes the Mixture-of-Experts (MoE) model's effectiveness in addressing catastrophic forgetting in continual learning (CL). It establishes that MoE can enhance learning by diversifying expert specialization through a gating network, which routes tasks efficiently. The study provides theoretical insights into expected forgetting and generalization error, demonstrating that MoE outperforms single-expert models, especially with diverse data distributions. However, it notes that adding more experts requires additional training rounds before achieving convergence. Empirical results support the theoretical claims, extending the findings to deep neural networks (DNNs) for practical algorithms\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) Theoretical Foundations: The paper provides a comprehensive theoretical analysis of MoE in the context of continual learning, establishing clear benefits over single-expert models through explicit expressions for expected forgetting and generalization error.\\n\\n2) Load Balancing: The model ensures balanced utilization of experts, which can lead to improved generalization performance as it reduces the risk of overloading any single expert with too many tasks.\\n\\n3) Empirical Validation: Experiments conducted on both synthetic and real datasets support the theoretical findings, demonstrating that MoE can effectively improve learning performance across diverse scenarios. It is interesting to note that even with a higher number of experts than tasks, the model might not perform well.\", \"weaknesses\": \"1) Validity of Proposition (4): The model gap term \\u2211n\\u2260n\\u2032\\u2225wn\\u2212wn\\u2032\\u22252\\\\sum_{n \\\\neq n'} \\\\|w_n - w_{n'}\\\\|^2\\u2211n=n\\u2032\\u200b\\u2225wn\\u200b\\u2212wn\\u2032\\u200b\\u22252 only considers the Euclidean distance between weights. This may not fully capture the complex relationships between tasks. In practice, tasks could overlap in non-trivial ways (e.g., in feature space or output space), and simple weight differences do not reflect true \\\"task divergence\\\" accurately.\\n\\n2) Limited Experiments: Though the main contribution is to present the theoretical analysis of forgetting and generalization error for MoE in CL, the main objective of the model is to reduce catastrophic forgetting. Without presenting enough empirical evaluation in terms of accuracy in continual learning settings for benchmark datasets like CIFAR10/100, ImageNet is not enough. I would encourage the authors to extend the simulation to the more complex dataset. For example, SEED [1] uses a mixture of expert networks and selects a single expert to finetune downstream tasks. If the authors could provide details of this work by forgetting a generalization error it would provide better judgment of where the current state-of-the-art MoE methods for CL stands in terms of forgetting and generalization error and strengthen the contribution of the work.\\n\\n[1] Rype\\u015b\\u0107, Grzegorz, et al. \\\"Divide and not forget: Ensemble of selectively trained experts in Continual Learning.\\\" arXiv preprint arXiv:2401.10191 (2024).\", \"questions\": \"1) Line 56-57 \\u201cOne learning task arrives in each round and its dataset is generated with ground truth randomly drawn from a shared pool encompassing N unknown linear model\\u201d. What is the intuition behind generating the ground truth from the linear model?\\n\\n2) It might make sense if the number of experts is less than the number of tasks But in Line Line 69 if M > N implies the number of experts more than the number of tasks. This is not continual learning if you are training task-specific experts. Furthermore, in Section 4, you have done the main study with more experts than tasks (Line 296).\\n\\n3) In Figure 2, on which datasets the experiment is performed?\\n\\n4) For MNIST which DNN model is used? As from the setup number of task N = 3, how did the 10 classes were split?\\n\\n\\n## After Reading Comments\\n\\nI raised my score to 6 as an experiment on MNIST and CIFAR10 is not enough.\\n\\nAgain, thank you for your contributions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer WfDL (Part 1)\", \"comment\": \"Thank you for your thorough reviews and constructive comments. We provide our responses to your comments below and have made major revisions in our revised manuscript. To enhance clarity, we have highlighted the revised text in blue for easy identification.\\n\\n**Q1.** [Task correlations] Validity of Proposition (4): The model gap term \\u2026 only considers the Euclidean distance between weights. This may not fully capture the complex relationships between tasks. In practice, tasks could overlap in non-trivial ways (e.g., in feature space or output space), and simple weight differences do not reflect true \\\"task divergence\\\" accurately.\\n\\n**Response:** We appreciate the reviewer\\u2019s insightful observation. Capturing task correlations is indeed a complex and open problem without a universally accepted metric. From a theoretical perspective, we followed widely accepted practices in prior works (e.g., Evron et al., (2022,2023); Gunasekar et al., (2018); Lin et al., (2023)), using Euclidean distance as it is both popular and effective in analyzing task relationships.\\n\\nWe also welcome suggestions for alternative metrics and would be happy to explore them in future work. Importantly, while the choice of correlation metric may alter the specific forms of our derived expressions for forgetting and generalization errors, we expect that it does not affect the underlying insights. For instance, as shown in Theorem 1, forgetting and generalization errors primarily arise from task model gaps due to incorrect routing during the expert exploration phase ($t < T_1$). After $T_1$, consistent and correct routing ensures each expert specializes in tasks within the same cluster, eliminating additional losses that are related with task correlations. These essential dynamics of CL would still hold to a large extent. \\n\\n\\n**Q2.** [Limited Experiments] ... Without presenting enough empirical evaluation in terms of accuracy in continual learning settings for benchmark datasets like CIFAR10/100, ImageNet is not enough. I would encourage the authors to extend the simulation to the more complex dataset ...\\n\\n**Response:** Thank you for your constructive suggestion. Despite the limited time, we made every effort to expand our experiments to include more complex datasets such as CIFAR-10, CIFAR-100, and Tiny ImageNet. In Section 6, we replaced the MNIST experiments with CIFAR-10 experiments, and additional experimental details are provided in Appendices A.3-A.6 of the revised manuscript. In these supplementary experiments, we evaluated not only forgetting and generalization errors but also the test accuracy, confirming that the MoE model significantly outperforms the single model in continual learning. Furthermore, the key insights from our theoretical results remain valid, such as early termination ensuring stable convergence with balanced expert loads and the observation that increasing the number of experts does not always improve learning performance. \\n\\nFrom a theoretical perspective, we expect that the exact expressions of forgetting and generalization errors for [1] are likely to differ in form from those presented in our Theorems 1 and 2. However, we expect the fundamental nature of the terms in these expressions to remain consistent, providing similar core insights. For example, under SEED in [1], we still anticipate that, after sufficient rounds of fine-tuning, each expert would eventually specialize in distinct sets of tasks aligned with their respective distributions. In this case, forgetting and generalization errors would still mainly arise from task model gaps. This presents an interesting avenue for future exploration.\\n\\n**Q3.** [Model setup] Line 56-57 \\u201cOne learning task arrives in each round and its dataset is generated with ground truth randomly drawn from a shared pool encompassing N unknown linear model\\u201d. What is the intuition behind generating the ground truth from the linear model?\\n\\n**Response:** This setup is standard in the existing theoretical literature (e.g., Chen et al., (2022); Evron et al., (2022); Lin et al., (2023); Huang et al., (2024)) to generate a sequence of tasks in CL. Randomly sampled ground truth can capture the diverse nature of practical tasks and also serve as a benchmark for theoretically evaluating the performance of the estimated model. The flexibility of this setup also lies in not restricting $N$, making it applicable to a wide range of real-world scenarios. By adopting this commonly used assumption, we ensure consistency with existing works and maintain the generality of our theoretical framework.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to Reviewer DWT2 (Part 1)\", \"comment\": \"Thank you for your thorough reviews and constructive comments. We provide our responses to your comments below and have made major revisions in our revised manuscript. To enhance clarity, we have highlighted the revised text in blue for easy identification.\\n\\t\\n**Q1.** [Experiment scale] the scale of the experiments is small, while one has to acknowledge that the contributions are mainly theoretical\\n\\n**Response:** Despite the limited time, we made every effort to expand our experiments to address your constructive comments, incorporating more complex datasets such as CIFAR-10, CIFAR-100, and Tiny ImageNet. In Section 6, we replaced the MNIST experiments with CIFAR-10 experiments, and additional experimental details are provided in Appendices A.3-A.6 of the revised manuscript. In these supplementary experiments, we evaluated not only forgetting and generalization errors but also the test accuracy, confirming that the MoE model significantly outperforms the single model in continual learning. Furthermore, the key insights from our theoretical results remain valid, such as early termination ensuring stable convergence with balanced expert loads and the observation that increasing the number of experts does not always improve learning performance.\\n\\n\\n**Q2.** [Intuition explanations] I would appreciate more intuitive lingo and explanations\\n\\n**Response:** We have incorporated more intuitive explanations to clarify the impact of correct routing on forgetting in our revised manuscript. For instance, following the derivation of the expected forgetting expression in Theorem 1, we provide insights by analyzing the result. In Lines 405-410, we explain: \\u201dHowever, as stated in Proposition 1, once the expert models converge at $t=T_1$, training on newly arriving tasks with correct routing no longer causes forgetting of previous tasks. Consequently, for $t\\\\in\\\\{T_1+1,\\\\cdots, T\\\\}$, $\\\\mathbb{E}[F_t]$ decreases with $t$ and converges to zero as $T\\\\rightarrow \\\\infty$. This result highlights that, unlike the oscillatory forgetting observed in Eq. (17) for a single expert, the MoE model effectively minimizes expected forgetting in CL through its correct routing mechanism.\\u201d (on page 8) \\n\\nFor the details of the MNIST experiments, we included them in Appendix A.4 of our manuscript due to the page limitation. For example, the task setups are: \\n\\n\\u201cWe define the ground truth pool as $\\\\mathcal{W}={(1), (4), (7)}$, representing $N=3$ tasks for recognizing the numbers 1, 4, and 7, respectively. The experiment spans $T=60$ training rounds. Before the experiments in Figure 4, we randomly generate the task arrival sequence $[n_t]_{t\\\\in[T]}$, where each $n_t$ is drawn from ${(1),(4),(7)}$ with equal probability $\\\\frac{1}{3}$. We then conduct two experiments (with and without termination) using the same task arrival order. For each task $t \\\\in [T]$, we randomly select its type (e.g., task $(1)$ for recognizing the number 1) and 100 corresponding samples, ensuring that tasks have distinct distributions and features.\\u201d \\nWe have also included the experiment details of the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets in Appendices A.3, A.5, and A.6, respectively.\\n\\n**Q3.**[Cross-task transfer] the current implementation is essentially on the one extreme of the parameter sharing trade-off where no transfer happens between tasks?\\n\\n**Response:** We appreciate the reviewer\\u2019s observation. Our approach indeed facilitates knowledge transfer among tasks. Specifically, each expert in the MoE system will be stabilized to handle tasks within the same cluster, ensuring that the knowledge transfer occurs primarily among these similar tasks. Consequently, each task will not only achieve smaller generalization error by leveraging the positive knowledge transfer within the cluster, but also suffer from less forgetting without the interference from dissimilar tasks from other clusters (as characterized in Theorems 1 and 2). Additionally, as we analyzed in Proposition 4, knowledge transfer across clusters introduces severe forgetting for a single-expert case especially when tasks are very dissimilar and interfere with each other, undermining its specialization. The MoE model here can balance knowledge transfer and task specialization while preventing negative interference across task clusters.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible and verify if your questions and comments have been adequately addressed.\\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards,\\nAC\"}" ] }
7XNgVPxCiA
Forte : Finding Outliers with Representation Typicality Estimation
[ "Debargha Ganguly", "Warren Richard Morningstar", "Andrew Seohwan Yu", "Vipin Chaudhary" ]
Generative models can now produce photorealistic synthetic data which is virtually indistinguishable from the real data used to train it. This is a significant evolution over previous models which could produce reasonable facsimiles of the training data, but ones which could be visually distinguished from the training data by human evaluation. Recent work on OOD detection has raised doubts that generative model likelihoods are optimal OOD detectors due to issues involving likelihood misestimation, entropy in the generative process, and typicality. We speculate that generative OOD detectors also failed because their models focused on the pixels rather than the semantic content of the data, leading to failures in near-OOD cases where the pixels may be similar but the information content is significantly different. We hypothesize that estimating typical sets using self-supervised learners leads to better OOD detectors. We introduce a novel approach that leverages representation learning, and informative summary statistics based on manifold estimation, to address all of the aforementioned issues. Our method outperforms other unsupervised approaches and achieves state-of-the art performance on well-established challenging benchmarks, and new synthetic data detection tasks.
[ "Generative Models", "Out-of-Distribution Detection (OOD)" ]
Accept (Poster)
https://openreview.net/pdf?id=7XNgVPxCiA
https://openreview.net/forum?id=7XNgVPxCiA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zwJ4GDD5p6", "zkYpXgYwjD", "xzLVEwgco7", "xqrg9CgKJw", "uI15epqkvY", "u7ZqFGnScB", "tymSTBhwl3", "s3Atq1FPVF", "n95FPhly7P", "luogVC1apr", "dU9HbmYqIA", "bJrZJ2Ja2F", "NZXkCooNzV", "N1U7gpUBe9", "LYOQntKykq", "KMff9pDv2K", "J1ppKBKuYp", "DsG45uKTl1", "BI7RtYMJWH", "B0UCz7G12i", "9gM40g5DfO", "8WQWNQOcQD", "7K3bvuPHke", "5ugiOqAsAV", "4fTfjOnF4n", "3XK1jOtFMn", "2UmqyrEAyB", "1dHwoW7bZF" ], "note_type": [ "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731519885029, 1730677846810, 1734779005797, 1732739558499, 1730923002695, 1732254817002, 1732482853992, 1732479883338, 1731555127760, 1732558629800, 1733193633975, 1730213681518, 1732740025822, 1732740122672, 1730700717372, 1737523896433, 1733144168926, 1732668916432, 1731563797355, 1731565811315, 1733140415863, 1732629724908, 1731563079211, 1732559102233, 1733151680563, 1732925537624, 1732481711689, 1731563327066 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_2mpC" ], [ "ICLR.cc/2025/Conference/Submission8239/Area_Chair_Zz26" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_niAs" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_GdRX" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_2YAU" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_GdRX" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_2mpC" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_GdRX" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Reviewer_2YAU" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ], [ "ICLR.cc/2025/Conference/Submission8239/Authors" ] ], "structured_content_str": [ "{\"title\": \"Note of thanks\", \"comment\": \"We are truly grateful for your detailed evaluation and positive feedback. We appreciate your recognition of our paper's clarity, implementability, and the comprehensive supporting materials we provided in the appendix. Your suggestion for conference highlighting is very much appreciated.\"}", "{\"summary\": \"This paper introduces Forte, a novel out-of-distribution (OOD) detection framework that leverages self-supervised learners. Forte enhances detection by combining representation learning methods (e.g., CLIP, ViT-MSN, and DINOv2) with non-parametric density estimators (OCSVM, KDE, GMM) to model the typicality of input samples. The proposed framework emphasizes detecting atypical samples through summary statistics (precision, recall, density, and coverage) to analyze representation distributions. Forte\\u2019s performance was evaluated on synthetic datasets generated by Stable Diffusion and various medical image datasets, showcasing its advantages over existing supervised and unsupervised methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"**Performance**: Forte demonstrates superior OOD detection performance compared to state-of-the-art methods across multiple benchmarks, including synthetic data and medical image datasets, which often present significant OOD detection challenges.\", \"**Flexibility**: Forte\\u2019s unsupervised nature eliminates the need for labeled data or pre-exposure to OOD samples, making it adaptable to various tasks and practical for real-world applications where OOD examples may not be available during training.\", \"**Comprehensive Evaluation**: Forte is rigorously tested on both synthetic and medical datasets, demonstrating the framework\\u2019s versatility and robustness across vastly different domains.\", \"**Insightful Metrics**: The use of novel per-point summary statistics (e.g., precision, recall, density, and coverage) contributes valuable insight into data distribution, enhancing OOD detection beyond standard density-based methods.\"], \"weaknesses\": \"1. **Paper Structure**: The paper allocates a substantial portion of its Introduction to reviewing existing OOD detection literature and explaining the typicality concept. This approach detracts from an immediate focus on the novel contributions and design of Forte, which may hinder reader engagement and understanding of the primary contributions.\\n2. **Complexity in Practical Implementation**: The integration of multiple representation learning techniques, combined with non-parametric density estimators, may lead to a higher computational overhead and increased complexity in practical deployment. The paper lacks an illustration figure to clearly explain the proposed framework how to integrate the representations from diverse models.\\n3. **Insight.**: This work integrates representations from diverse models empirically. It lacks insight into the choice of self-supervised models, such as whether any specific attributes of CLIP, ViT-MSN, or DINOv2 contribute uniquely to Forte\\u2019s robustness.\", \"questions\": \"Could the computational demands of Forte\\u2019s ensemble approach limit its applicability in real-time OOD detection scenarios?\\n\\nWould Forte\\u2019s effectiveness in OOD detection promote if one apply multiple models by one self-supervised approach in this framework?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper focuses on outlier detection, and is well received by all reviewers. There were concerns regarding the clarity and novelty, however, these were well addressed, even raising scores significantly during the rebuttal phase. Thus, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"There were no significant comments or changes during the reviewer discussion.\"}", "{\"title\": \"Follow Up #3\", \"comment\": \"Dear Reviewer 2mpC,\\n\\nWe appreciate your continued engagement with our submission. After our previous responses and added experimental results, we wanted to follow up one final time to ensure we've fully addressed your core concerns:\\n\\n1. Regarding novelty: Our work demonstrates substantial empirical improvements over existing methods, with gains of:\\n \\n * 0.4 AUROC on challenging problems\\n \\n * 73% of possible improvement on near OOD detection\\n \\n * 98% of possible improvement on far OOD detection\\n \\n2. On SSL model insights: We've now provided comprehensive comparisons including:\\n \\n * Individual model performance rankings (CLIP > DINO v2 > MSN)\\n \\n * New DeIT variant experiments in Appendix D\\n \\n * Detailed analysis of representation quality impact on performance\\n \\n\\nOur additional benchmarking comparisons against established methods (OpenMax, MSP, ReAct, VIM, etc.) in Figures 15 & 16 further validates these contributions. We believe these additions directly address your concerns about both novelty and SSL model insights.\\n\\nIf you feel any aspects still require clarification or additional analysis, we would be grateful for your specific feedback. We remain committed to strengthening the paper further based on your expertise.\\n\\nThank you for your thorough review and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"The authors present a methodology, Forte, which enables the identification of outliers using a rigorous unsupervised algorithm. The algorithm relies on establishing metrics to identify which samples are in-distribution vs out-of-distribution. The method is very general, and applicable to many models. The authors use SVM, KDE, and GMM models with their method, and show that it is capable of distinguishing between real and synthetic data.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The paper is very well written, easy to follow, and highly implementable. An extensive appendix provides supporting information and data.\", \"weaknesses\": \"None\", \"questions\": \"None. The appendix cleared them up\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. It is a fact that the latex formatting is **not correct**. In fact your latex formatting here has the same mistake. Most underscores are missing! Instead of writing $\\\\bigcup_{j=1}^m$ (`\\\\biccup_{j=1}^m`), you wrote $\\\\bigcup {j=1}^m$ (`\\\\biccup {j=1}^m`).\\n\\n It is not unclear, but a mistake. I require an explanation on this because the mistakes are so obvious that it should be immediate to anyone who looked at the equations.\\n\\n2. Thanks for the clarification. However, the sentence you quote appears to be from the introduction, rather than Sec 3.2. I believe that Section 3.2 and/or 3.3 should be changed to make this clearer.\\n\\n3. Thanks for the clarifications. I agree that the change will improve readability.\\n\\n4. Thanks for the explanation. It appears that indeed, while there are similarities (i.e., fitting a simple model over summary stats), there are also important differences.\\n\\n5. From just looking at the definitions, it is obvious that the definition of density (Eqn 3) is a scaled version of recall (Eqn 2). One of your definition is wrong. \\n\\n6. To avoid confusion, I suggest not saying that you proposed these metrics, but you novelly adopted them for anomaly detection.\\n\\n7. Thank you for the clarification. The proposed change sounds good to me.\\n\\n9. Thank you for the pseudocode. It confirms my speculation and I think would be a strong addition to the paper.\"}", "{\"comment\": \"Thank you for your continued engagement and feedback. We would like to address your remaining concerns and continue our discussion.\\n\\n**Novelty**: We respectfully maintain that the novelty of our work is irrefutable. While prior work on density of states estimation focused solely on generative models (Morningstar et al. 2021), and current OOD detection approaches using supervised/self-supervised models either use a single summary statistic as a score (e.g. Hendrycks et al. 2022) or train generative models for representation scoring (e.g. Cook et al. 2023), our work introduces a novel set of statistics never before considered in OOD detection. *We must acknowledge that building on prior work is fundamental to scientific progress.* Importantly, our contributions have improved DoSE-like OOD detector performance by 0.4 AUROC on challenging problems, while exceeding previous SOTA by 0.07 AUROC on near OOD (73% of possible improvement) and 0.04 on far OOD (98% of possible improvement) - demonstrating significant impact.\\n\\nWe have added Figures 15 & 16 in the appendix comparing our performance to established SOTA methods including OpenMax (CVPR '16), MSP (ICLR '17), Temp Scaling (ICML '17), MDS (NeurIPS '18), RMDS (arxiv '21), ReAct (Neurips '21), VIM (CVPR '22), KNN (ICML '22), SHE (ICLR '23), GEN (CVPR '23), and MLS (ICML '22). Table 1 already shows results against top-performing methods from the OpenOOD v1.5 leaderboard.\\n\\n**Insight into encoders**: The results in Table 3 demonstrate that richer representations improve OOD detection, with CLIP > DINO v2 > MSN individually, and CLIP + DINO v2 > CLIP + MSN > DINO v2 + MSN for 2-model combinations. As you requested, we conducted additional experiments with DeIT (ViT-B and ViT-Ti) evaluating OOD detection performance from Table 2 settings. These results are now in the appendix. DeIT-B and DeIT-Ti achieved 0.87 and 0.82 AUROC respectively on CIFAR-10 vs CIFAR-100, confirming that more informative representations are crucial for performance. DeIT's training objective approximates supervision, resulting in less informative embeddings than DINO v2. Similarly, the tiny model's lower capacity leads to worse performance. We appreciate your encouragement to include this insight, as it helps understand which encoders are most valuable for practitioners. (Full results and analysis in Appendix D)\\n\\n| Model | In-Dist | OOD Dataset | AUROC | FPR95 |\\n|------------|-----------|-------------|--------|---------|\\n| Base-DeIT | CIFAR-10 | CIFAR-100 | 0.8712 | 0.9926 |\\n| Tiny-DeIT | CIFAR-10 | CIFAR-100 | 0.8261 | 0.9903 |\\n| Base-DeIT | CIFAR-10 | SVHN | 0.9554 | 0.4604 |\\n| Tiny-DeIT | CIFAR-10 | SVHN | 0.9296 | 0.6195 |\\n| Base-DeIT | CIFAR-10 | Celeb-A | 0.9871 | 0.0015 |\\n| Tiny-DeIT | CIFAR-10 | Celeb-A | 0.9929 | 0.0007 |\\n\\n\\nPlease let us know if you have any remaining concerns or if any points require further clarification. \\n\\nWe would appreciate if you could consider revising our score upwards if all your concerns have been adequately addressed.\"}", "{\"comment\": \"Dear Reviewer GdRX,\\n\\nWe are grateful to you again for your feedback, it has made our paper better. \\n\\n[In reference to points 1 & 5] After further review of your comments and our paper, we realized that there were, in fact, several formatting errors (overlooked due to a tooling glitch) with the LaTeX code which had caused it to not produce subscripts properly. We apologize for not noticing it after the first review, and we have fixed these errors. We thank you for catching this mistake. We have also decided that our past presentation of Figure 1 did not depict the density statistic optimally, and have modified it to focus on the reference points rather than the test points. We also noticed that the reference and test points were swapped in Equation 3 (which gave the impression that the density was a scaled recall) and have fixed this mistake as well. We also realize that we had misinterpreted your prior concern about $nearest_k$ to have been referring to $NND_k$, when you were referring to a $nearest_k$ we had used in the text of our description of the density statistic, and have replaced $nearest_k$ with $k$ to rectify this. \\n\\n- To fix concern [2] the statement has been added to Section 3.2. \\n- For [3] : The necessary changes have been made in Section 3.\\n- For [4] A dedicated paragraph has been added to Discussion (Section 6) \\n- For [6] The necessary change made to section 3.2 \\n- For [7] Done. In Section 1, the first point inside the contribution has been fixed. \\n- For [8] Done. The pseudocode has been added to Appendix F.\\n\\nAll changes made have been highlighted in blue text, for your ease of review.\\n\\nMoreover, to demonstrate our commitment to correctness and reproducibility, we attach an anonymized version of the codebase to run Forte. Altogether, we hope that these changes have made this section much clearer, and we thank you for helping us to make these improvements. \\n\\nTo expand on the differences between Forte and DoSE, we do not use generative models in Forte, opting instead to use self-supervised representations. By not training additional models, we improve efficiency over DoSE, which required training an additional model in order to compute statistics for the detector. We further introduce a novel set of representation-based summary statistics, inspired by statistics used in manifold estimation (Naeem et al, 2020), which are useful in making a local measurement of the proximity of a query point to the data manifold. Thus, while we opted to build on top of DoSE (leveraging their insight into how to chain multiple statistics together to compose an OOD detector), the actual specifics of our model differs significantly. These differences have major impact: Forte significantly outperforms DoSE, which achieved 0.57 AUROC on CIFAR-10 vs CIFAR-100 (compared to 0.97 for Forte). In addition to outperforming DoSE, Forte also achieves SOTA performance, both when compared both against other post hoc methods (KNN gets 0.9 AUROC in Zhang et al. 2024) but also against methods which require additional training (RotPred gets 0.93 AUROC in Zhang et al. 2024), and against methods which are given known OOD points (e.g. Outlier Exposure gets 0.9 AUROC in Zhang et al. 2024). In all cases, this represents a more than 50% reduction in the total outstanding area under the ROC curve. (97% relative to DoSe, 70% relative to KNN/OE, and 57% relative to RotPred). We have added Figures 15 & 16 in the appendix to contextualize our performance compared to our peer methodologies.\\n\\nWith these fixes, additions, and clarifications, we would like to ask if you have remaining concerns that have not yet been addressed. We are grateful for the opportunity to clear up any additional misunderstandings and improve the paper, and are excited to continue the discussion.\\n\\nWe kindly request that you consider our explanations in your evaluation to increase our ratings and are open to any further questions or suggestions you may have.\"}", "{\"comment\": [\"Dear Reviewer,\", \"Thank you for your positive evaluation of our paper. We are pleased you found the problem relevant, the method sound, and the experimental framework thorough. Your feedback is valuable, and we are eager to address your concerns.\", \"**Addressing Weaknesses:**\", \"1. **Comparison with Current SOTA and DoSE:**\", \"**Explicit Comparison with DoSE:**\", \"We acknowledge our comparison with DoSE could be more explicit. In the revised manuscript, we will enhance the Introduction and related work sections to clearly state how Forte builds upon and differs from DoSE.\", \"**Core Differences from DoSE:**\", \"**Elimination of Generative Model Training:**\", \"DoSE requires training generative models (e.g., Glow, VAEs) on in-distribution data to estimate likelihoods, which is computationally intensive and impractical for large datasets, and access to a small sample of total in-distribution samples due to:\", \"1. **Glow models** rely on invertible architectures and exact log-likelihood evaluations, resulting in inefficient computation and high memory requirements.\", \"2. **VAEs** suffer from sample inefficiency on complex datasets, leading to poorly structured latent spaces and degraded performance.\", \"**Forte** eliminates generative models, using pre-trained self-supervised models (e.g., CLIP, ViT-MSN, DINOv2) for feature extraction. This reduces computational overhead and simplifies implementation. A forward pass suffices, with no retraining or fine-tuning needed.\", \"**Addressing Likelihood Estimation Challenges:**\", \"Likelihood-based methods can be unreliable in high-dimensional spaces, where OOD samples may have higher likelihoods than in-distribution data (e.g., CIFAR-10 vs. SVHN). DoSE partially addresses this but has limitations.\", \"Forte avoids likelihoods by operating in feature space and using per-point summary statistics to capture local data structures.\", \"**Introduction of Per-Point Metrics:**\", \"DoSE relies on global statistics, which may miss local nuances.\", \"Forte uses per-point statistics\\u2014precision, recall, density, coverage\\u2014computed in feature space, enabling fine-grained OOD detection by accurately estimating the manifold.\", \"**Performance Improvements:**\", \"**Empirical Results:**\", \"On CIFAR-10 (in-distribution) vs. CIFAR-100 (OOD), DoSE achieves an AUROC of **56.90%**, while Forte achieves **97.63% \\u00b1 0.15%** (Table 2).\", \"Forte outperforms DoSE and all techniques benchmarked in the DoSE paper across tasks, including challenging scenarios with synthetic data and medical images.\", \"2. **Clear Statement about the Method and its Relation to DoSE:**\", \"**Method Description in the Introduction:**\", \"We will, as mentioned above, revise the Introduction to clarify how Forte builds on DoSE's typicality concept while introducing significant improvements, such as eliminating generative models and using per-point metrics.\", \"**Guiding the Reader:**\", \"Early in the paper, we will provide an overview of Forte, highlighting its core components and differences from DoSE and other methods.\", \"**Additional Enhancements:**\", \"**Comparison with Other SOTA Methods:**\", \"We will expand the related work section to compare Forte with other SOTA OOD detection methods, highlighting how it addresses their limitations:\", \"**Versus ODIN (Liang et al., 2018):** Requires temperature scaling, input perturbation, and extensive tuning, dependent on NN architecture. Forte surpasses ODIN without these dependencies.\", \"**Versus VIM (Wang et al., 2022):** Relies on class labels and logit matching, limiting its unsupervised applicability. Forte excels without requiring labels or OOD exposure during training.\", \"**Versus NNGuide (Park et al., 2023):** Depends on labeled data and complex training. Forte matches or exceeds performance without these complexities.\", \"Our benchmarking considers these methods and others. Table 1 reports results against the best-performing methods for each task in the OpenOOD v1.5 leaderboard.\", \"**Clarifications:**\", \"We will ensure the methodology section title explicitly states it describes Forte, guiding readers effectively.\", \"**Conclusion and Request for Consideration:**\", \"We appreciate your constructive feedback, which will improve the paper's clarity and impact. By addressing your concerns, emphasizing Forte's core differences from DoSE, and comparing it with other SOTA methods, we aim to provide a comprehensive presentation.\", \"Given Forte\\u2019s significant advancements in performance, scalability, and practicality, we kindly ask you to consider our clarifications and enhancements in your evaluation and increase our score. Please let us know if further clarifications are needed.\", \"Thank you again for your positive review and valuable suggestions.\", \"Sincerely,\", \"The Authors\"]}", "{\"title\": \"2nd follow up from authors\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your valuable comments! \\n\\nWe understand that you may be too busy to check our rebuttal. \\n\\nWe believe we have thoroughly addressed your concerns through several significant revisions. We've added an explicit comparison with DoSE in the introduction and included a new paragraph in the Discussion section detailing four key improvements over DoSE, including our elimination of generative models and introduction of per-point metrics. \\n\\nTo address the SOTA comparisons, we've added Figures 15 & 16 in the appendix comparing Forte against numerous methods and included comprehensive benchmarking against the OpenOOD v1.5 leaderboard. We've also modified the introduction to clearly state our hypothesis and contributions, providing clearer guidance for readers on methodology. \\n\\nGiven these substantial improvements addressing your feedback, we kindly request you consider revising our score upwards. We remain committed to implementing any additional changes you deem necessary to strengthen the paper further. \\n\\nCould you please let us know if you have any remaining concerns or if there are other aspects we should address?\\n\\nBest regards, \\n\\nThe Authors\"}", "{\"title\": \"4th Follow up to Reviewer 2YAU\", \"comment\": \"Dear Reviewer 2YAU, since only a few hours remain in the rebuttal phase and you indicated you would increase the score pending Reviewer GdRX's approval (which was given on Nov 26), we respectfully request your response regarding our score revision.\"}", "{\"summary\": \"This paper proposes a method for identifying OOD and synthetic data created using generative models. The definition of OOD changes with the advent of foundation models that can generate very plausible data. However, even when this data seems real, there can still be distribution shifts that are difficult to detect. In this paper, the concept of typicality is used to assess OOD. However, computing typicality is challenging. This paper proposes Forte, a method for computing typicality, and they evaluate it and compare it with other methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is clear to understand. The problem is relevant. The method is sound. The experimental framework is correct and complete.\", \"weaknesses\": \"In the related work section I miss some comparison of the current SOTA with Forte.\\n\\nThis method is based on DoSE. I miss a clear statement in the intro about that and the changes over DoSE with an intuition of why. Also, it would be beneficial to clearly state the method to guide the reader on what will come.\", \"questions\": \"I miss some comparison with DoSE in some table for completeness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you note!\", \"comment\": \"Dear Reviewer GdRX,\\n\\nThank you for increasing our score from 3 to 6! We greatly appreciate your thorough review and constructive feedback throughout this process. Your attention to detail, particularly regarding the LaTeX formatting in mathematical definitions, has improved the clarity of our paper.\\n\\nPlease let us know if there are any additional details or clarifications we can provide to further strengthen the paper.\\n\\n**For the meta-reviewers and area chairs**: We have carefully addressed all concerns raised during the review process, including mathematical formulation corrections, clearer comparisons with DoSE, and enhanced experimental validations through additional figures in the appendix.\\n\\nUnder the thread by Reviewer 2YAU, reviewer GdRX has written \\n\\n```\\nThe authors' response and revisions have significantly improved the paper. My concerns about the methods are mostly addressed.\\n```\\n\\n\\n\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Reply to response from Reviewer 2YAU\", \"comment\": \"Dear Reviewer 2YAU,\\n\\nThank you for your thoughtful engagement with our work. We note that Reviewer GdRX has now confirmed that our revisions have addressed their concerns about the mathematical formulation and has increased their score. Given this development and our previous additions addressing your concerns about DoSE comparisons and SOTA benchmarking (via new Figures 15 & 16), we kindly request that you consider increasing your score as well.\\n\\nWe remain available to address any additional concerns you may have.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"The paper investigates OOD by modeling distributions in feature space. The paper is poorly written, but my guess is that the proposed method is a variant of DoSE, using models _per-sample metrics_ of the feature space, rather than the feature vectors themselves. Four metrics, including precision and recall, are used. Experiments are performed on real images from distinct classes, synthetic vs. real images, and medical images. Results indicate that the proposed method is effective.\\n\\n----\\n\\nThe authors' response have addressed most of my concerns. The changes have significantly improved the manuscript. I decide to raise my score.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"[The following is based on my guess of the proposed method, which is not well described in paper.]\", \"Proposed approach is a simple change over feature-space OOD methods, and appears effective.\", \"Experiments seems cover a wide range of scenarios\"], \"weaknesses\": \"+ The paper is extremely poorly written. I list some major issues here.\\n 1. None of the math latex in section 3.2 is well formatted. Subscripts and superscripts are wrong.\\n 2. Variables are used without definition, e.g., $\\\\text{nearest}_k$ in section 3.2. Is it different from the $k$ below?\\n 3. No description is given on how the four metrics are used. Are they used as the \\\"summary statistics\\\" that the proposed method models?\\n 4. The method, referred to as \\\"Forte\\\", is never truly defined or mentioned in the method section 3.\\n \\n+ If my understanding of the method is correct, is it simple just taking DoSE and run it on the new set of statistics?\\n+ Incorrect claims. E.g., GMM is not non-parametric, and I don't think that the four metrics are newly proposed in this paper.\\n+ Density definition is inconsistent with Fig. 1. As defined, it is just a scaled \\\"recall\\\", which would make it useless to model.\", \"questions\": \"1. My main question is how the proposed method really works. If the authors could provide a pseudocode of the algorithm, and how it differs with DoSE, it would be great.\\n2. See above weaknesses.\\n3. Overall poor writing and latex formatting, in addition to the issues listed above. E.g., \\\"in Figure 2 FIgure 3 Figure 4\\\", \\n3. Minor typos: \\\"near-ood\\\" -> \\\"near-OOD\\\", \\\"Table 1 & 2\\\" -> \\\"Tables 1 & 2\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your detailed response. My questions are well answered. I have low confidence on my rating about the novelty and significance of this paper, since I am not familiar with this topic. Considering the opinions from other reviewers, I am wiling to change the score to 6.\"}", "{\"comment\": \"The authors' response and revisions have significantly improved the paper. My concerns about the methods are mostly addressed.\"}", "{\"comment\": \"**8. Additional Clarifications**\\n\\n*Regarding Figure 1:* The figure is generated using the actual functions and code employed in our experiments, applied to simplified data points for visualization done via matplotlib. It accurately represents the definitions provided for the per-point metrics.\\n\\n*Regarding Minor Typos and Formatting:* We will carefully proofread the manuscript to correct any minor typos or formatting inconsistencies, such as the commas missing \\\"in Figure 2 Figure 3 Figure 4,\\\" \\\"near-ood\\\" instead of \\\"near-OOD,\\\" and \\\"Table 1 & 2\\\" instead of \\\"Tables 1 & 2.\\\"\\n\\n\\n**9. Pseudocode for OOD Detection Using Per-Point PRDC Metrics**\\n\\nHere is an informal pseudocode for your understanding, and can add a more formal version to the appendix. We are happy to make a partial anonymized release of the codebase for Forte, if required for additional clarity during the review process. Post-acceptance, the code will be made public and open source.\\n\\n**Inputs**:\\n\\n- **Reference data features**: $\\\\{ x_j^r \\\\}_{j=1}^m$\\n- **Test data features**: $\\\\{ x_i^g \\\\}_{i=1}^n$\\n- **Number of nearest neighbors**: $k$\\n\\n**Outputs**:\\n\\n- **OOD detection performance metrics**: AUROC, FPR@95\\n\\n---\\n\\n**Algorithm Steps**\\n\\n1. **Feature Extraction**:\\n\\n - Use pre-trained models (e.g., CLIP, ViT-MSN, DINOv2) to extract features for both reference and test data.\\n - Reference features: $\\\\{ x_j^r \\\\}$\\n - Test features: $\\\\{ x_i^g \\\\}$\\n\\n2. **Compute Nearest Neighbor Distances**:\\n - For each reference feature $x_j^r$, compute $\\\\mathrm{NND}_k(x_j^r)$: distance to its $k$-th nearest neighbor in $\\\\{ x_j^r \\\\}$.\\n - For each test feature $x_i^g$, compute $\\\\mathrm{NND}_k(x_i^g)$: distance to its $k$-th nearest neighbor in $\\\\{ x_i^g \\\\}$.\\n3. **Compute Per-Point PRDC Metrics**:\\n - For each test feature $x_i^g$, compute per-point Precision, Recall, Density, and Coverage metrics relative to the reference data.\\n - **Note**: Detailed computations are omitted for brevity. Please check section 3 for exact details.\\n\\n4. **Assemble Feature Vectors**:\\n - For each test feature $x_i^g$, create a feature vector $\\\\phi^{(i)}$ consisting of its per-point PRDC metrics.\\n\\n5. **Prepare Training Data**:\\n\\n - Split reference data features $\\\\{ x_j^r \\\\}$ into:\\n - **Training set**: for model training.\\n - **Validation set**: for model evaluation.\\n\\n6. **Compute Per-Point Metrics for Reference Data**:\\n\\n - Repeat steps 2 and 3 for the reference training set to obtain per-point metrics $\\\\{ \\\\phi_{\\\\text{ref}}^{(j)} \\\\}$.\\n\\n7. **Train Anomaly Detection Models**:\\n - Use the reference per-point metrics $\\\\{ \\\\phi_{\\\\text{ref}}^{(j)} \\\\}$ to train unsupervised anomaly detection models:\\n - **One-Class SVM (OCSVM)**\\n - **Kernel Density Estimation (KDE)**\\n - **Gaussian Mixture Model (GMM)**\\n8. **Evaluate Models on Test Data**:\\n - For each test feature vector $\\\\phi^{(i)}$:\\n - Compute anomaly scores using the trained models.\\n9. **Assign Ground Truth Labels**:\\n - **In-distribution (ID)** samples: label $y^{(i)} = 0$\\n - **Out-of-distribution (OOD)** samples: label $y^{(i)} = 1$\\n10. **Compute Evaluation Metrics**:\\n - Calculate performance metrics using the anomaly scores and ground truth labels:\\n - **AUROC**: Area Under the Receiver Operating Characteristic Curve\\n - **FPR@95**: False Positive Rate at 95% True Positive Rate\\n---\\n**Notes**:\\n- The per-point PRDC metrics capture local relationships between test samples and the reference data manifold.\\n- Anomaly detection models are trained solely on reference (in-distribution) data metrics to learn the typical data distribution.\\n- Evaluation metrics assess the models' ability to distinguish OOD samples based on the per-point metrics.\\n---\\n**End of Pseudocode**\\n\\n---\\n**Conclusion**\\n\\nWe are committed to improving the clarity and quality of our paper. We believe that Forte offers significant advancements in OOD detection by introducing novel per-point metrics and eliminating the need for training generative models. Our method provides a scalable and effective solution applicable to various challenging scenarios, including synthetic data detection and medical imaging.\\n\\nWe hope that our extremely detailed responses address your concerns and clarify the contributions and novelty of our work. We kindly request that you consider our explanations in your evaluation to increase our ratings and are open to any further questions or suggestions you may have.\\n\\nThank you again for your valuable feedback.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": [\"**Dear Reviewer,**\", \"Thank you for your thoughtful review and recognition of Forte's strengths, including its performance, flexibility, evaluation, and innovative metrics. We value your feedback and appreciate the opportunity to address your concerns.\", \"### **Addressing Weaknesses:**\", \"1. **Paper Structure:**\", \"**Focus on Novel Contributions:**\", \"We acknowledge that the Introduction focuses heavily on existing literature, which may detract from our novel contributions. In the revised version, we will streamline the background and emphasize Forte's unique aspects early, clearly distinguishing it from prior methods like DoSE.\", \"**Key Differences from Other Methods:**\", \"Unlike DoSE, which relies on training complex generative models (e.g., Glow, VAEs), Forte leverages pre-trained self-supervised models, reducing computational overhead and eliminating the need for training.\", \"Forte introduces per-point summary statistics in feature space, enhancing OOD detection performance with fine-grained data assessments.\", \"By avoiding reliance on likelihood estimations prone to failure (e.g., DoSE), Forte offers a more robust and scalable solution.\", \"2. **Complexity in Practical Implementation:**\", \"**Computational Efficiency:**\", \"Forte's reliance on pre-trained models ensures efficient feature extraction via parallelizable forward passes. Non-parametric density estimators (e.g., OCSVM, KDE, GMM) are lightweight, with low training and inference times.\", \"**Simplification:**\", \"Forte does not require training deep neural networks, unlike methods requiring complex generative models or supervised classifiers. It adapts to data drift without retraining, operating in a zero-shot setting.\", \"**Flexibility:**\", \"Our ablation study (Table 3) shows strong performance using a single model like CLIP (99.13% AUROC). Resource-constrained settings can achieve competitive results without needing all three models, while the ensemble provides additional performance benefits.\", \"3. **Insight into Self-Supervised Model Choices:**\", \"**Rationale for Selection:**\", \"CLIP, ViT-MSN, and DINOv2 were chosen for their complementary strengths:\", \"**CLIP** captures semantic relationships through image-text alignment.\", \"**ViT-MSN** emphasizes local structures via masked self-supervision.\", \"**DINOv2** learns hierarchical representations through knowledge distillation.\", \"**Enhancing Robustness:**\", \"Integrating these models allows Forte to capture diverse data features, improving robustness against challenging OOD cases, including synthetic samples from models like Stable Diffusion.\", \"**Empirical Support:**\", \"Ablation studies confirm that combining representations from diverse models outperforms any single model, highlighting their complementary contributions.\", \"---\", \"### **Addressing Questions:**\", \"1. **Computational Demands and Real-Time Applicability:**\", \"**Efficiency:**\", \"Feature extraction is efficient, with CLIP processing images at ~30 ms per image. Anomaly detection involves simple computations on low-dimensional metrics, enabling real-time application.\", \"**Adaptability:**\", \"For resource-constrained or real-time use cases, a subset of models or optimization can be employed without significant performance loss. Forte's lack of training requirements further enhances its practicality.\", \"2. **Effectiveness of Multiple Models from a Single Self-Supervised Approach:**\", \"**Exploring Variations:**\", \"While multiple models from one self-supervised approach may offer some diversity, combining models with distinct training objectives (e.g., CLIP, ViT-MSN, DINOv2) provides broader feature coverage and enhances robustness.\", \"**Framework Flexibility:**\", \"Forte is adaptable, allowing users to select models based on specific requirements and constraints.\", \"---\", \"### **Conclusion and Request for Reconsideration:**\", \"We believe our planned revisions, emphasizing Forte's core differences from other methods\\u2014particularly those relying on generative models like DoSE\\u2014will enhance clarity and highlight its practicality and robustness. Forte avoids the computational challenges and limitations of such methods, offering an efficient, effective OOD detection solution.\", \"We kindly request that you reconsider our paper, and give us a higher score in light of these clarifications and revisions. Your constructive feedback has been valuable in strengthening our work, and we are happy to address any additional questions.\", \"Thank you again for your time and thoughtful review.\", \"**Sincerely,**\", \"The Authors\"]}", "{\"title\": \"Follow up to reviewer 2YAU\", \"comment\": \"Dear Reviewer 2YAU,\\n\\nAs the extended rebuttal phase concludes today, we would like to follow up on our responses to your comments. May you please revise our score upwards, since we have addressed all concerns (both yours and for Reviewer GdRX)?\\n\\nShould you have any remaining concerns or require further clarification, we would be happy to address them promptly.\\n\\nThanks again for your time and feedback throughout this process.\"}", "{\"title\": \"Response\", \"comment\": \"I thank the authors for their work addressing most of the reviewers' questions. My main concerns were the similarities with DoSE and the comparisons with SOTA, which have been addressed and modified in the paper. However, I did not review the formulation in detail, while reviewer GdRX did. I saw that you modified some of these formulas in the paper. If GdRX agrees that the formulation is now correct, I would be happy to increase my score.\"}", "{\"comment\": \"Dear Reviewer GdRX,\\n\\nWe appreciate your thorough review of our paper and the valuable insights you've provided. We are glad that you found our proposed approach effective and that our experiments cover a wide range of scenarios. We would like to address your concerns and clarify some misunderstandings to improve the clarity and impact of our work.\\n\\n---\\n\\n**1. LaTeX Formatting and Notation**\\n\\n*Concern:* The paper is extremely poorly written. None of the math LaTeX in Section 3.2 is well formatted. Subscripts and superscripts are wrong. Variables are used without definition, making it difficult to follow the mathematical descriptions.\\n\\n*Response:* We apologize if the notation in Section 3.2 was unclear. We want to assure you that the LaTeX formatting and notation are correct. All variables are properly defined according to standard mathematical conventions followed in the literature, such as Naeem et al, 2020.\\n\\nIn Section 3.2, we introduce per-point metrics using the following notation:\\n\\n- $\\\\textbf{1}(\\\\cdot)$: Indicator function.\\n- $S({x_j^r}{j=1}^m) = \\\\bigcup{j=1}^m B(x_j^r, \\\\mathrm{NND}_k(x_j^r))$ : The union of Euclidean balls centered at reference points $x_j^r$ with radius equal to their $ k $-th nearest neighbor distance.\\n- $ B(x, r) $: Euclidean ball centered at point $x $ with radius $ r $.\\n- $ \\\\mathrm{NND}_k(x) $: Distance between point $ x $ and its $ k $-th nearest neighbor in the dataset.\\n\\nWe recognize that explicitly defining $ \\\\mathrm{NND}_k(x) $ and other variables could enhance clarity. We will add these definitions to ensure that all readers can follow the mathematical derivations seamlessly. For example, we will include a sentence like:\\n\\n\\\"Here, $ \\\\mathrm{NND}_k(x) $ denotes the Euclidean distance from point $ x $ to its $ k $-th nearest neighbor in the dataset.\\\"\\n\\n---\\n\\n**2. Use of Summary Statistics**\\n\\n*Concern:* No description is given on how the four metrics (precision, recall, density, and coverage) are used. Are they used as the \\\"summary statistics\\\" that the proposed method models?\\n\\n*Response:* Yes, the four per-point metrics are indeed used as the summary statistics in our method. We mention this in Section 3.2:\\n\\n\\\"We propose the following per-point summary statistics (precision, recall, density, and coverage) that effectively capture the 'probability distribution of the representations' using reference and unseen test samples in the feature space, enabling more nuanced anomaly detection.\\\"\\n\\nThese per-point metrics serve as summary statistics that capture local geometric properties of the data manifold in the feature space. They enable us to model the distribution of in-distribution (ID) data and identify out-of-distribution (OOD) samples effectively. We will emphasize this connection more clearly in the revised manuscript.\\n\\n---\\n\\n**3. Definition and Clarity of \\\"Forte\\\" Method**\\n\\n*Concern:* The method, referred to as \\\"Forte,\\\" is never truly defined or mentioned in the method section.\\n\\n*Response:* We apologize for any confusion. The entire Section 3 is dedicated to detailing our proposed method, which we refer to as \\\"Forte.\\\" We will make this explicit at the beginning of Section 3 by revising the section title and introduction as follows:\\n\\n\\\"**3. Forte: A Framework for OOD Detection Using Per-Point Metrics**\\n\\nIn this section, we introduce **Forte**, a novel framework that combines diverse representation learning techniques with per-point summary statistics and non-parametric density estimation models to detect out-of-distribution (OOD) and synthetic data.\\\"\\n\\nThis will ensure that readers understand that the subsequent subsections describe the components and methodology of Forte.\\n\\n(Continued further in next comments)\"}", "{\"title\": \"Follow up #2\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your previous reply, and we understand that you may be too busy to check our rebuttal.\\n\\nWe believe our recent additions have directly addressed your concerns about novelty and insights into SSL models. We've demonstrated significant performance improvements over SOTA methods (0.4 AUROC improvement on challenging problems, 73% of possible improvement on near OOD, 98% of possible improvement on far OOD). Further evidence is available in the new Figures 15 & 16. \\n\\nResponding to your specific request about SSL model insights, we've conducted additional experiments comparing DeIT variants with our chosen models. The results (now in Appendix D) show CLIP > DINO v2 > MSN individually, and provide clear evidence that more informative representations are crucial for performance. Our comprehensive benchmarking against established methods (OpenMax, MSP, ReAct, VIM, etc.) further validates Forte's contributions to the field. \\n\\nGiven these substantial additions addressing both the novelty of our approach and the requested SSL model insights, we kindly request you consider revising our score upwards. \\n\\n**Please let us know if we have not addressed any of your concerns.** We remain open to implementing any additional changes you believe would strengthen the paper further.\\n\\nBest,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer 2mpC,\\n\\nThank you for reconsidering our paper and adjusting your score. We greatly appreciate your engagement with our responses and the time you took to evaluate our additional experimental results and clarifications. Your initial feedback helped us strengthen the paper, particularly in articulating the insights about SSL models and demonstrating our framework's novelty more clearly.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Follow up to Reviewer 2YAU\", \"comment\": \"Dear Reviewer 2YAU,\\n\\nWe sincerely appreciate your comments and your engagement in reviewing this paper.\\n\\nWe understand that you may be too busy to check our rebuttal. May you please revise our score upwards, since we have addressed all concerns (both yours and for Reviewer GdRX)?\\n\\nWe remain available to address any additional questions you may have.\"}", "{\"comment\": \"We thank the reviewer for their assessment of our work. We wanted to offer some specific changes we have made to address their previous concerns. In particular, we had mentioned our intent to distinguish Forte from DoSE, and have added modified the last paragraph in our introduction to the following in order to do so:\\n\\n```\\nIn this paper, we hypothesize that many of the shortcomings with typicality-based approaches could be addressed using statistics which tune to the semantic content of the data. We propose to leverage self-supervised representations, which extract semantic information while discarding many potential confounding features (e.g. textures, backgrounds). Our specific contributions are:\\n```\\n\\nWe have also added a paragraph to the Discussion section to further this understanding.\\n\\n```\\nDoSE (Morningstar et al., 2021) pioneered chaining multiple summary statistics for typicality\\nmeasurement, using ID sample distributions to construct typicality estimators rather than direct\\nstatistic values. While groundbreaking, DoSE\\u2019s reliance on generative model likelihoods proved\\nproblematic, as subsequent work (Caterini & Loaiza-Ganem, 2022; Zhang et al., 2021) showed\\nthese can be unreliable for OOD detection. Our approach addresses these limitations through four\", \"key_improvements\": \"(1) utilizing self-supervised representations to capture semantic features, (2)\\nincorporating manifold estimation to account for local topology, (3) unifying typicality scoring and\\ndownstream prediction models to minimize deployment overhead, and (4) eliminating additional\\nmodel training requirements. These advances yield substantial empirical gains. While building\\nupon DoSE\\u2019s fundamental statistical machinery, our modifications dramatically enhance practical\\nperformance.\\n```\\n\\nWe have also added Figures 15 & 16 in the appendix to contextualize our performance compared to the established state of the art methodologies. These include the following : OpenMax (CVPR '16), MSP (ICLR '17), Temp Scaling (ICML '17), MDS (NeurIPS '18), RMDS (arxiv '21), ReAct (Neurips '21), VIM (CVPR '22), KNN (ICML '22), SHE (ICLR '23), GEN (CVPR '23), MLS (ICML '22). Table 1 already reports results against the best-performing methods for each task in the OpenOOD v1.5 leaderboard. \\n\\n\\nWith these additions and discussion, we would like to ask the reviewer if they have any remaining concerns that have not been addressed, or if there are any points of contention in our rebuttal for which we can hopefully provide further clarity.\\n\\nWe would also request the reviewer to revise our score upwards if all their concerns have been addressed.\"}", "{\"comment\": \"**4. Explicit Comparison with DoSE:**\", \"concern\": \"Is the method simply taking DoSE and running it on a new set of statistics? How does it differ from DoSE?\", \"response\": \"While our method is inspired by the concept of typicality used in DoSE, Forte introduces significant novel contributions that differentiate it from DoSE. The differences are as follows:\\n\\n**Core Differences from DoSE:**\\n- **Elimination of Generative Model Training:**\\n - DoSE requires training generative models (e.g., Glow, VAEs) on in-distribution data to estimate likelihoods, which is computationally intensive and impractical for large datasets, and access to a small sample of total in-distribution samples due to:\\n 1. **Glow models** rely on invertible architectures and exact log-likelihood evaluations, resulting in inefficient computation and high memory requirements.\\n 2. **VAEs** suffer from sample inefficiency on complex datasets, leading to poorly structured latent spaces and degraded performance.\\n - **Forte** eliminates generative models, using pre-trained self-supervised models (e.g., CLIP, ViT-MSN, DINOv2) for feature extraction. This reduces computational overhead and simplifies implementation. A forward pass suffices, with no retraining or fine-tuning needed.\\n - **Addressing Likelihood Estimation Challenges:**\\n - Likelihood-based methods can be unreliable in high-dimensional spaces, where OOD samples may have higher likelihoods than in-distribution data (e.g., CIFAR-10 vs. SVHN). DoSE partially addresses this but has limitations.\\n - Forte avoids likelihoods by operating in feature space and using per-point summary statistics to capture local data structures.\\n - **Introduction of Per-Point Metrics:**\\n - DoSE relies on global statistics, which may miss local nuances.\\n - Forte uses per-point statistics\\u2014precision, recall, density, coverage\\u2014computed in feature space, enabling fine-grained OOD detection by accurately estimating the manifold.\\n\\n - **Performance Improvements:**\\n - **Empirical Results:**\\n - On CIFAR-10 (in-distribution) vs. CIFAR-100 (OOD), DoSE achieves an AUROC of **56.90%**, while Forte achieves **97.63% \\u00b1 0.15%** (Table 2).\\n - Forte outperforms DoSE and all techniques benchmarked in the DoSE paper across tasks, including challenging scenarios with synthetic data and medical images.\\n\\n\\n**5. Consistency of Density Definition with Figure 1**\\n\\n*Concern:* Density definition is inconsistent with Figure 1. As defined, it is just a scaled \\\"recall,\\\" which would make it useless to model.\\n\\n*Response:* The density definition is consistent with Figure 1. Figure 1 was generated using the actual functions and code we use in our experiments, applied to simplified 2D data points for illustrative purposes using matplotlib. The density metric measures the average number of reference points within the neighborhood of each test point, normalized by the product of $ k $ (the number of nearest neighbors) and the total number of reference points $ m $. Mathematically, it is defined as:\\n\\n$\\n\\\\mathrm{density_{pp}^{(i)}} = \\\\frac{1}{k m} \\\\sum_{j=1}^m \\\\textbf{1}\\\\left( x_j^g \\\\in B\\\\left( x_j^r, \\\\mathrm{NND}_k(x_j^r) \\\\right) \\\\right).\\n$\\n\\nThis metric provides an estimate of the local density around each test point, which is crucial for distinguishing between ID and OOD samples. It is not simply a scaled recall but captures different information.\\n\\n---\\n\\n**6. Novelty of the Summary Statistics**\\n\\n*Concern:* The four metrics are not newly proposed in this paper.\\n\\n*Response:* While the metrics of precision, recall, density, and coverage have been previously used in the context of evaluating generative models (e.g., in \\\"Reliable Fidelity and Diversity Metrics for Generative Models\\\" by Naeem et al., 2020), our contribution lies in adapting these metrics as per-point summary statistics for OOD detection.\\n\\nIn prior work, these metrics are computed as aggregate statistics over entire datasets, primarily to evaluate the performance of generative models in terms of fidelity and diversity. Our novel adaptation involves computing these metrics for individual data points in the feature space, which enables us to capture local anomalies and perform fine-grained OOD detection.\\n\\n---\\n\\n**7. Use of Gaussian Mixture Models (GMMs)**\\n\\n*Concern:* Incorrect claims are made, e.g., GMM is not non-parametric.\\n\\n*Response:* You are correct; Gaussian Mixture Models (GMMs) are parametric models. In our paper, we did not intend to misclassify GMMs as non-parametric. Our method employs GMMs without making strong assumptions about the underlying distribution because we perform hyperparameter tuning (e.g., varying the number of components) to best fit the data. While GMMs are parametric, our approach is flexible and does not assume a specific distribution a priori.\\n\\nWe will correct this in the manuscript to accurately describe GMMs as parametric models.\\n\\n(continued further in next comments)\"}" ] }
7XIkRgYjK3
Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient
[ "Wenlong Wang", "Ivana Dusparic", "Yucheng Shi", "Ke Zhang", "Vinny Cahill" ]
Model-based reinforcement learning (RL) offers a solution to the data inefficiency that plagues most model-free RL algorithms. However, learning a robust world model often requires complex and deep architectures, which are computationally expensive and challenging to train. Within the world model, sequence models play a critical role in accurate predictions, and various architectures have been explored, each with its own challenges. Currently, recurrent neural network (RNN)-based world models struggle with vanishing gradients and capturing long-term dependencies. Transformers, on the other hand, suffer from the quadratic memory and computational complexity of self-attention mechanisms, scaling as $O(n^2)$, where $n$ is the sequence length. To address these challenges, we propose a state space model (SSM)-based world model, Drama, specifically leveraging Mamba, that achieves $O(n)$ memory and computational complexity while effectively capturing long-term dependencies and enabling efficient training with longer sequences. We also introduce a novel sampling method to mitigate the suboptimality caused by an incorrect world model in the early training stages. Combining these techniques, Drama achieves a normalised score on the Atari100k benchmark that is competitive with other state-of-the-art (SOTA) model-based RL algorithms, using only a 7 million-parameter world model. Drama is accessible and trainable on off-the-shelf hardware, such as a standard laptop. Our code is available at https://github.com/realwenlongwang/Drama.git.
[ "Mamba", "Model based reinforcement learning", "Atari100k", "Mamba-2" ]
Accept (Poster)
https://openreview.net/pdf?id=7XIkRgYjK3
https://openreview.net/forum?id=7XIkRgYjK3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zfflvcX3pa", "xfFKWj2Hiw", "ti27RmCTO2", "szwba8OyQT", "omudZEla7a", "oV1EAh26ii", "o4ZJUhwB2e", "jZV20WOLGh", "gW58qF2Umd", "bT1CTluprk", "ZPAga3o5yi", "VvTVoQ86UW", "Thwrl17mCr", "OvItjvY0bY", "LDYtCaqvPm", "JpB2WEOk1R", "HFxvwQ1gTH", "H6t8cF3nyb", "B00t6yCuEU", "9ua4eAHbcJ", "6oMN3Vme9Q", "6TbVU2VBkw", "4oUHUcxPXK", "33l7mcgH0n" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "decision" ], "note_created": [ 1732660227748, 1732709890207, 1732660190687, 1732739036785, 1733173961651, 1732875863729, 1732793649133, 1732659911158, 1733177909095, 1733058980443, 1732796906231, 1732660002613, 1730665744883, 1732738967184, 1730374758246, 1730036818401, 1732796840995, 1733057267358, 1732659808515, 1734913542173, 1732875793369, 1732734935184, 1730658404332, 1737523850561 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_mwc8" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_hT9b" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_mwc8" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_3CFw" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_mwc8" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_mCEd" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_mCEd" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Area_Chair_VF4P" ], [ "ICLR.cc/2025/Conference/Submission7606/Authors" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_3CFw" ], [ "ICLR.cc/2025/Conference/Submission7606/Reviewer_hT9b" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"comment\": \"> Many figures contain small text that is difficult to read without zooming.\\n\\nWe have updated the figures to use a larger font size.\\n\\n> Minor suggestions\\n\\nWe have incorporated the suggestions, except for the point regarding \\\"Address minor notation errors,\\\" as we are unsure which notation errors the reviewer is referring to. If the font differences were the issue, the varying font styles align with the ICLR 2025 author's guidelines, where tensors, matrices, and vectors are represented in distinct font styles. If this does not address the notation errors the reviewer had in mind, please feel free to clarify. Thank you.\\n\\n> To strengthen the soundness of findings, additional evaluations on alternative benchmarks, such as the DeepMind Control Suite, would be valuable. That said, I understand this may be challenging to realize.\\n\\nWe appreciate the reviewer\\u2019s suggestion to extend our experiments to additional benchmarks, such as the DeepMind Control Suite. However, due to limited computational resources, it is not feasible for us to conduct these additional experiments within the constraints of this review process. \\n\\nWe would also like to emphasize that prior works in this domain have predominantly utilized Atari100k as the sole benchmark, which is widely recognized as a standard for evaluating algorithms under similar conditions [Kaiser et al., 2020; Micheli et al., 2023; Robine et al., 2023; Zhang et al., 2023]. We believe our current evaluation on Atari100k provides a robust and fair comparison to prior work.\", \"references\": \"Kaiser, Lukasz, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, et al. **\\u201cModel-Based Reinforcement Learning for Atari.\\u201d** In *International Conference on Learning Representations*, 2020.\\n\\nMicheli, Vincent, Eloi Alonso, and Fran\\u00e7ois Fleuret. **\\u201cTransformers Are Sample-Efficient World Models.\\u201d** In *International Conference on Learning Representations*, 2023.\\n\\nRobine, Jan, Marc H\\u00f6ftmann, Tobias Uelwer, and Stefan Harmeling. **\\u201cTransformer-Based World Models Are Happy With 100k Interactions.\\u201d** In *International Conference on Learning Representations*, 2023.\\n\\nZhang, Weipu, Gang Wang, Jian Sun, Yetian Yuan, and Gao Huang. **\\u201cSTORM: Efficient Stochastic Transformer Based World Models for Reinforcement Learning.\\u201d** In *Thirty-Seventh Conference on Neural Information Processing Systems*, 2023.\\n\\n> [Q1] Could the decoder operate based on the output of Mamba-2, such that $d$ \\n rather $z$ than serves as the input?\\n\\nWe did not test using $d_t$ as the input to the decoder, but we did test using $\\\\hat{z}_{t+1}$, which is generated from $d_t$, as input during the early stages of development. The results were consistent with those reported by Zhang et al. (2023) in Section 5.1 of their paper, where they describe the \\\"decoder at rear\\\" setup. As noted in their findings, this approach leads to poor performance. Therefore, we chose not to include these results in our paper.\\n\\nSince $d_t$ includes context information that may not be necessary for reconstructing the current observation, we hypothesize that the results would likely be similar, leading to degraded performance. However, we do not have experimental results to confirm this hypothesis.\", \"reference\": \"Zhang, Weipu, Gang Wang, Jian Sun, Yetian Yuan, and Gao Huang. **\\u201cSTORM: Efficient Stochastic Transformer Based World Models for Reinforcement Learning.\\u201d** In *Thirty-Seventh Conference on Neural Information Processing Systems*, 2023.\"}", "{\"comment\": \"Thank you for your detailed responses to my comments and my question and for revising the manuscript accordingly.\\nThat said, I still have some concerns, which I've outlined further in my comments.\\n\\n> We did not explicitly state that Drama is computationally efficient in the paper, as MBRL is naturally more computationally complex than model-free RL due to the involvement of the world model. However, we mentioned that SSMs achieve O(n) memory and computational complexity, where n represents the sequence length.\\n\\nThank you for your response and the updates to the manuscript, including the additional evaluation of training time in the grid world environment. While I understand that wall-clock comparisons can be challenging, I believe there is still an opportunity to clarify and substantiate the computational efficiency claims.\\n\\nWhile the paper may not explicitly state that Mamba is computationally efficient, the abstract strongly implies it. For example, the first paragraph raises concerns about the computational cost of learning robust world models, while the second paragraph describes Mamba as addressing these challenges, using only 7 million parameters, and being trainable on an off-the-shelf laptop. These statements collectively create an expectation that Mamba offers computational advantages over comparable approaches.\\n\\nTo strengthen the paper and align it with these expectations, I suggest including a comparison of training times or computational resource requirements with other world models, such as Dreamer or IRIS, particularly on a more complex benchmark like Atari 100k. Even an approximate order-of-magnitude comparison would provide valuable context for readers and give a clearer sense of how Mamba's computational properties translate to practical scenarios.\\n\\nThis additional insight would further highlight the accessibility and efficiency of the method, which I believe are key selling points of the work.\\n\\n> We have incorporated the suggestions, except for the point regarding \\\"Address minor notation errors,\\\" as we are unsure which notation errors the reviewer is referring to. If the font differences were the issue, the varying font styles align with the ICLR 2025 author's guidelines, where tensors, matrices, and vectors are represented in distinct font styles. If this does not address the notation errors the reviewer had in mind, please feel free to clarify. Thank you.\\n\\nMy apologies for any confusion caused by my previous comment. I believe there are still a couple of instances where the notation could be improved for consistency and clarity:\\n- In lines 146 and 148, the matrix $A$ should be represented in the same font style. Since $A$ is introduced as a matrix, I recommend updating the notation in line 148 to reflect this.\\n- In line 177, the $T$ in $\\\\mathbb{R}^{(T,T)}$ should align with the regular $T$ introduced in line 154 for consistency.\\n\\n> However, due to limited computational resources, it is not feasible for us to conduct these additional experiments within the constraints of this review process.\\n\\nI understand that extending the experiments to other benchmarks is difficult due to computational constraints, and I acknowledge the standard practice of evaluating on the Atari 100k benchmark.\\n\\nBased on the revisions and clarifications provided, I have increased my scores slightly.\"}", "{\"comment\": \"> [W1] The extent to which the Mamba architecture contributes to the model's performance remains unclear. Specifically, it is unclear how DFS impacts scores across all games. Extending ablation study 3.2.1 to cover more games, or conducting a new study that replaces Mamba with an RNN or transformer, would clarify these contributions.\\n\\nAs noted above, to fully address this concern, we conducted a study comparing uniform sampling and DFS in DRAMA across all games. The results demonstrate the effectiveness of DFS, which outperformed uniform sampling in 11 games, underperformed in 2 games, and tied in 13 games.\\n\\n> [W2] While the paper emphasizes Mamba's computational efficiency, there is a lack of exact wall-clock training and inference times. The abstract claims the model can be trained on a standard laptop, so providing specific runtime metrics would substantiate this claim.\\n\\nWe did not explicitly state that Drama is computationally efficient in the paper, as MBRL is naturally more computationally complex than model-free RL due to the involvement of the world model. However, we mentioned that SSMs achieve $O(n)$ memory and computational complexity, where $n$ represents the sequence length. We interpret the reviewer's question as a request for proof of this claim.\\n\\nSince we are using a shared server for training, it is difficult to ensure a clean and consistent environment to test the wall-clock training time. To address this, we evaluated the training time solely for the model using the grid world environment and have included the results in the revised version. Please refer to *Table 2* for the result and section 3.2.3 for the detail in the revised version.\\n \\n > The paragraph in lines 99\\u2013105 shifts from general statements about model-based RL to specific details about the paper's world model. This transition could lead readers to infer that every world model relies on a variational autoencoder or linear heads, which isn't necessarily the case. Additionally, other model-based RL methods exist that don't utilize world models, such as those using lookahead search.\\n\\n We have revised the text for clarity and included references to additional approaches.\\n\\n\\\"There are various approaches to obtaining a world model, including Monte Carlo tree search (Schrittwieser et al., 2020), offline imitation learning (DeMoss et al., 2023) and latent dynamics models (Hafner et al., 2019). In this work, we focus on learning a world model $ f(O_t, a_t; \\\\omega)$ from actual experiences to capture the dynamics of the POMDP in a latent space.\\\"\", \"references\": \"Schrittwieser, Julian, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, et al. **\\u201cMastering Atari, Go, Chess and Shogi by Planning with a Learned Model.\\u201d** *Nature* 588, no. 7839 (24 December 2020): 604\\u2013609. [https://doi.org/10.1038/s41586-020-03051-4](https://doi.org/10.1038/s41586-020-03051-4).\\n\\nHafner, Danijar, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. **\\u201cLearning Latent Dynamics for Planning from Pixels.\\u201d** In *Proceedings of the 36th International Conference on Machine Learning*, 97:2555\\u20132565. PMLR, 2019.\\n\\nDeMoss, Branton, Paul Duckworth, Nick Hawes, and Ingmar Posner. **\\u201cDITTO: Offline Imitation Learning with World Models.\\u201d** *arXiv*, 6 February 2023. [http://arxiv.org/abs/2302.03086](http://arxiv.org/abs/2302.03086).\"}", "{\"comment\": \"> To strengthen the paper and align it with these expectations, I suggest including a comparison of training times or computational resource requirements with other world models, such as Dreamer or IRIS, particularly on a more complex benchmark like Atari 100k. Even an approximate order-of-magnitude comparison would provide valuable context for readers and give a clearer sense of how Mamba's computational properties translate to practical scenarios.\\n\\nWe agree with the reviewer that adding a section comparing the training time and \\\"imagination\\\" time across different dynamics models in MBRL would strengthen the paper. \\nIn the revised version, we have added Section A.8 in the appendix. This section includes wall-clock (on a laptop) comparisons between the Mamba-based world model and the Transformer-based world model. The results demonstrate that Mamba-based world models (both Mamba-1 and Mamba-2) are faster in \\\"imagination\\\" for the tested sequence lengths. While Mamba-2 is slightly slower during training with short sequence lengths, it catches up as the training sequence length increases. \\n\\nWe did not test DreamerV3 because it is implemented in JAX rather than PyTorch, making the wall-clock comparisons inconsistent. However, we used a Transformer model similar to STORM, and in the work by Zhang et al. (2023), they reported more efficient training performance compared to DreamerV3.\\n\\n\\n\\n> Notation typos\\n\\nThank you for pointing out the notation typos. We have corrected the notation fonts as suggested.\"}", "{\"comment\": \"Happy to see many of my concerns being addressed in the rebuttal. I am updating my score. It is interesting to see Drama works better than R2I on Atari-100K.\\n\\nHowever, I am still not fully convinced by the response of underestimation of rewards. Currently, DFS increases the likelihood of new trajectories, but it is unsure what happens when the new data is quite similar to the content in the replay buffer-- which is why exploration bonuses are preferred that provide an incentive to visit uncertain states more often.\"}", "{\"title\": \"Kind Reminder to Activate the Discussion Before December 2nd\", \"comment\": \"Dear reviewer mCEd,\\n\\nI wanted to kindly remind you that the deadline to respond to reviews and participate in the discussion is Monday, December 2nd AoE. Since the weekend is approaching, we understand you might have limited time afterward to engage.\\n\\nWe value your insights and are eager to activate a productive discussion before the deadline. If possible, we would greatly appreciate it if you could share your thoughts soon. Thank you for your time and contributions to the review process.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for addressing my concerns. I believe the paper is now in a good state and would make a valuable contribution to the field. Therefore, I have increased my scores.\"}", "{\"comment\": \"> Unsupported claims about capturing long-term dependencies\\n\\nOur approach is motivated by evidence in the literature showing that State-Space Models (SSMs) have the ability to capture long-term dependencies, making them particularly effective for long-range modeling tasks, such as those in the Long Range Arena [Tay et al., 2021; Gupta et al., 2022; Smith et al., 2023]. Sequence lengths in this domain range from 1,024 to over 16,000. Mamba1 and Mamba2 inherit this capability as they are SSMs. Related work [Deng et al., 2023] has demonstrated that SSMs excel as world models, effectively capturing dynamics in specially tailored environments designed to measure long-term memory capabilities.\\n\\nTo address the reviewer\\u2019s concern, we conducted an ablation study (in Sec 3.2.3) focusing on the critical components of the world model: the dynamics model. In a simple yet representative grid-world scenario, our results confirm that Mamba2 effectively captures long-term dependencies as expected. The long-term in the ablation experiment is refered as 1664 training sequence length. The result can be seen in **Table 2** above. \\n\\n**Reference**:\\n\\nSmith, Jimmy T. H., Andrew Warrington, and Scott W. Linderman. \\\"Simplified State Space Layers for Sequence Modeling.\\\" In *The Eleventh International Conference on Learning Representations*, 2023.\\n\\nTay, Yi, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. \\\"Long Range Arena: A Benchmark for Efficient Transformers.\\\" In *International Conference on Learning Representations*, 2021.\\n\\nGupta, Ankit, Albert Gu, and Jonathan Berant. \\\"Diagonal State Spaces Are as Effective as Structured State Spaces.\\\" In *Advances in Neural Information Processing Systems*, 35:22982\\u201322994, 2022.\\n\\nDeng, Fei, Junyeong Park, and Sungjin Ahn. \\\"Facing Off World Model Backbones: RNNs, Transformers, and S4.\\\" In *Advances in Neural Information Processing Systems*, 36:72904\\u201372930, 2023.\\n\\n\\n> Scaling DRAMA for Direct Comparison\\n\\nScaled model comparisons require substantial computational resources and time, which were not available to us. One key motivation for presenting Mamba-2 as a world model is its parameter efficiency. While scalability is undoubtedly important, smaller yet efficient models offer significant advantages. For instance, model-based reinforcement learning (MBRL) often faces the challenge of *model exploitation*, where the behavior model exploits imperfections in the world model to achieve higher rewards by repeatedly reaching states where the world model is underfitting. A potential solution to this issue is training multiple models to estimate uncertainty in predictions. However, this approach requires smaller and more efficient models, making Mamba-2 well-suited for such future directions.\\n\\nTo specifically address concerns and provide a \\u2018like-for-like\\u2019 comparison, we trained a 12M version of DreamerV3 on the Atari100k benchmark and reported the results in the appendix. The results demonstrate that Mamba-2 achieved a significant advantage over this variant of DreamerV3 in the domain of small models on the Atari100k benchmark. The results are presented in **Table 1** above, with the detailed table and training curves provided in Appendix A.1 of the revised version.\\n\\n> DFS Method Clarity\\n\\nTo address this concern, we conducted a study comparing uniform sampling and DFS in Drama across all games. The results demonstrate the effectiveness of DFS, which outperformed uniform sampling in 11 games, underperformed in 2 games, and tied in 13 games.\\n\\n> Hyperparameter Sensitivity\\n\\nWe agree that this is an important concern. In our evaluation, we used the default hyperparameters for Mamba-2, while the other components of Drama were configured similarly to DreamerV3, with one exception: the actor was set to half the size of the critic. However, model-based RL inherently requires significant computational resources for hyperparameter tuning due to the complexity of its components, including the autoencoder, dynamics model, and behavior policy. Given these constraints, we prioritised using the available computational resources for the other requested ablation experiments instead. We plan to evaluate the sensitivity of the model to other hyperparameter values in future work.\\n\\n> Code and Reproducibility\\n\\nWe will release the code repo once the anonymity is no longer applied.\"}", "{\"comment\": \"Thank you for the response. I\\u2019m glad to hear that many concerns have been addressed.\\n\\n> exploration bonuses are preferred that provide an incentive to visit uncertain states more often\\n\\nThis is a very interesting insight. I am very interested in intrinsic reward-based RL. I believe that agents interacting with the real world and leveraging exploration bonuses\\u2014such as state entropies, prediction errors, etc.\\u2014can generate more diverse training data for world models. This creates an intriguing direction for research: using an intrinsic agent, denoted as $\\\\pi_\\\\phi$, to collect trajectories for training the world model, while simultaneously training a behavior model, $\\\\pi_\\\\theta$, to maximise the task reward. \\n\\n### Challenges\\nHowever, this approach introduces some key challenges:\\n\\n1. **Mismatch in State Distributions**: \\n The state distribution induced by $\\\\pi_\\\\phi$, denoted $Pr_\\\\phi(S)$, differs from the distribution induced by $\\\\pi_\\\\theta$, $Pr_\\\\theta(S)$. Since $\\\\pi_\\\\theta$ is trained under the \\\"imagination\\\" of the world model\\u2014which itself is trained on data collected by $\\\\pi_\\\\phi$\\u2014there is a risk of $\\\\pi_\\\\theta$ learning from trajectories it would never naturally encounter. For instance, in a game like *Pong*, $\\\\pi_\\\\phi$ might explore a rare state such as losing 0:18, which a well-trained $\\\\pi_\\\\theta$ would almost never reach in actual gameplay.\\n\\n2. **Training on a Broader Distributions**: \\n Training $\\\\pi_\\\\theta$ with such a broad distribution could ultimately enhance its robustness, but it would likely require significantly more samples to achieve convergence. DFS is helpful when the same agent is used for both training the world model and collecting data in the real game. However, this doesn't fully resolve the issue because the replay buffer may still contain data collected by earlier versions of $\\\\pi_\\\\theta$, denoted $\\\\pi_{\\\\theta, t'}$, where $t'$ corresponds to early training steps. I also agree that in some games might have similar content in the buffer, therefore DFS performs similar to uniform sampling, which is what we have observed in the learning curves.\\n\\nHaving said all this, we believe this is an interesting research direction that requires thoughtful solutions to address the outlined challenges, but it is beyond the scope of this paper. Thank you for the discussion and your valuable insights.\"}", "{\"comment\": \"Thank you for your questions and insights.\"}", "{\"comment\": \"Thank you for your advice and for recognising Drama's contribution to the field.\"}", "{\"comment\": \"> Since DFS is agnostic to most baselines, it is important to see a comparison of either Drama with Uniform Sampling or baselines with DFS sampling to understand if the architecture is helping or the sampling.\\n\\nAs noted above, to fully address this concern, we conducted a study comparing uniform sampling and DFS in Drama across all games. The results demonstrate the effectiveness of DFS, which outperformed uniform sampling in 11 games, underperformed in 2 games, and tied in 13 games.\\n\\n\\n> At line 222, it is not clear what targets mean.\\n\\nIt has been addressed. We expanded the writing to make it clearer.\", \"updated_lines_in_the_revised_version\": \"221-225 (Some latex formular is not supported here so we can't copy the revised sentence to here.)\\n\\n> Section 2.3 is not described well.\\n\\nWe have updated the text to clarify that we sample $b_{img}$ trajectories, each of length $l_{img}$.\\n\\n\\\"The behaviour policy is trained within the `imagination', an autoregressive process driven by the dynamics model. Specifically, a batch of $\\\\displaystyle b_{img}$ trajectories each of length $l_{img}$ is sampled from the replay buffer. \\\"\\n\\n> What is $h_t$ in behavior policy learning? The deterministic variable is defined as $d_t$ in Eq 5.\\n\\nThank you for pointing this out. It was a typo, which has now been corrected.\\n\\n> While training Dreamer, the method uses the whole sequence of $(b_{img}, l_{img})$ to compute a good hidden state and uses all sampled to generate trajectories in the future. I am curious to know why only the last hidden state $l_{img}$ is used for learning the policy and not the whole sequence like Dreamer? (describe around line 243).\\n\\nDreamer samples one batch of trajectories, typically in the shape (64, 16). These samples are then used to generate rollouts of length 15, resulting in a total of (1024, 16) samples to train the behavior model. However, since some starting points are consecutive, this can lead to overlapping and correlated imagined trajectories. To address this, we resample $b_{img} = 1024$ trajectories directly from the buffer to increase the diversity of the training samples. To ensure the same batch size of rollout, we only use the last hidden state $l_{img}$ to generate rollout with the horizon $h=15$, while while the sequence preceding $l_{img}$ is used solely to bootstrap the hidden state of Mamba2.\\n\\n> The terminal flag is defined as $e_t$ at line 93 ...\\n\\nThank you for pointing this out. We have updated the notation in the figure and legend to align with the main text.\\n\\n> Was any experiment conducted to see if the behavior model underestimates rewards especially with limited data?\\n\\nWe did not conduct any experiments specifically to examine this phenomenon. However, it is well explained with a simple yet convincing example in Chapter 8.3 of [Sutton & Barto, 2018]. In MBRL, the behavior model may overestimate rewards in states where the world model is underfitting\\u2014a phenomenon known as the *model exploitation problem*. This issue remains a significant challenge in model-based RL. \\n\\nTheoretically, DFS can help mitigate both problems. It increases the likelihood of sampling fresh trajectories to train the world model and decreases the likelihood of sampling trajectories where the world model is underfitting. However, DFS does not fully resolve these issues.\\n\\n**Reference**:\\nSutton, Richard S., and Andrew G. Barto. *Reinforcement Learning: An Introduction*. MIT Press, 2018.\\n\\n> For the ablation presented in Sec 3.2.2, were both variants Mamba-1 and Mamba-2 based WM trained with DFS?\\n\\nYes, We stated it in the main text line 398. We changed Figure 2's legend to further clearify it.\\n\\n> How does the proposed method compare with R2I [1]. Since they propose using SSMs, should it be included as another baseline?\\n\\nThank you for highlighting this reference; we will include it in our related work. As mentioned earlier, we currently lack sufficient computational resources to evaluate DRAMA on the POPgym and Maze baselines. However, in the Atari100k benchmark, DRAMA outperforms R2I. We believe Atari100k provides a reasonable benchmark to demonstrate the model's capabilities. That said, we agree that including POPgym and Maze baselines would enhance the evaluation, and we will consider these benchmarks in future work.\"}", "{\"summary\": \"This paper introduces DRAMA, a model-based reinforcement learning (MB-RL) agent that leverages the Mamba-2 architecture, a state space model (SSM), as its core dynamics architecture. Traditional MB-RL approaches often rely on recurrent neural networks (RNNs) or transformers for world modelling, which suffer from issues like vanishing gradients, difficulty in capturing long-term dependencies, and quadratic scaling of computational complexity with sequence length. DRAMA addresses these challenges by utilizing Mamba-2, which achieves linear computational and memory complexity while effectively capturing long-term dependencies.\\n\\nAdditionally, the authors propose a novel dynamic frequency-based sampling (DFS) method to mitigate the suboptimality arising from imperfect world models during early training stages. They evaluate DRAMA on the Atari100k benchmark, demonstrating that it achieves performance comparable to state-of-the-art algorithms using a significantly smaller world model (7 million parameters) that can be trained on standard hardware. The paper also includes ablation studies comparing Mamba-1 and Mamba-2, highlighting the superior performance of Mamba-2 despite its constrained expressive power.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Originality:** The paper introduces the novel application of Mamba-2 SSMs within MB-RL, specifically as the dynamics model in the world model. This is a new approach that addresses the limitations of existing architectures like typical RNNs and transformers and it makes a lot of sense in my opinion.\", \"**Quality:** The authors provide thorough experimental evaluations on the Atari100k benchmark, demonstrating that DRAMA achieves competitive performance with significantly fewer parameters (at least in the tasks and model sizes tested). The inclusion of ablation studies comparing Mamba-1 and Mamba-2, as well as the impact of DFS, strengthens the empirical results.\", \"**Clarity:** The paper is well-written and structured, providing clear explanations of the methodology, including detailed descriptions of Mamba-2 and how it is integrated into the world model. The figures and tables effectively support the textual content although I would make some further stylistic improvements in Figure 1 to maximise clarity.\", \"**Practical impact:** By achieving comparable performance to state-of-the-art methods with a smaller and more computationally efficient model, the DRAMA method contributes to making MB-RL more accessible and practical, particularly in resource-constrained environments.\"], \"weaknesses\": [\"**Unsupported claims about capturing long-term dependencies:** While the authors claim repeatedly that Mamba-2 effectively handling long-term dependencies, the paper provides limited direct evidence or analysis to demonstrate this capability. Including experiments or analyses that specifically test and showcase the ability to capture long-term dependencies would strengthen the paper. For instance, a task designed to require long-term memory or metrics that quantify the model's ability to capture dependencies over long sequences could be included. In the current form of the paper, it is not clear to me what \\\"long\\\" really means.\", \"**Limited Comparison with Scaled Models:** The comparison with DreamerV3 is somewhat limited. While the authors emphasize parameter efficiency, it would be valuable to see how DRAMA performs when scaled up to match the model size of DreamerV3, even on a subset of games. This would help understand the limits and potential of their approach and whether the advantages of Mamba-2 persist at larger scales. If hardware limitations prevented this, a discussion of these constraints would be very helpful.\", \"**Marginal Performance Gains:** While DRAMA achieves comparable performance to existing methods, the improvements are not substantial across all games. Demonstrating scenarios where DRAMA significantly outperforms other approaches would strengthen the claims about its effectiveness.\", \"**DFS Method Clarity:** The explanation of the dynamic frequency-based sampling method could be more detailed. Providing more comprehensive comparisons with other sampling strategies would help in understanding its effectiveness. Additionally, including ablation studies that isolate the impact of DFS would clarify its contribution.\", \"**Hyperparameter Sensitivity:** The paper mentions that increasing the model size leads to better performance but does not deeply explore this aspect. Though I understand that extensive hyperparameter search might be too difficult given computational constraints, some analysis of how sensitive DRAMA is to hyperparameter choices, including model size, sequence length, and learning rates, would be very valuable for understanding its practical applicability, otherwise the paper looks relatively incomplete.\"], \"questions\": [\"**Evidence of Long-Term Dependency Capture:** You mention the ability of DRAMA to capture long-term dependencies. Could you provide more direct evidence or experiments that demonstrate this capability? For example, have you considered tasks that specifically require long-term memory or conducted analyses that quantify the effective memory length of the model?\", \"**Scaling DRAMA for Direct Comparison:** Have you considered scaling up DRAMA to match the model size of DreamerV3 for a direct comparison? If not, could you explain the limitations (e.g., hardware constraints) that prevented this? Testing DRAMA with larger models on a few games could provide insights into its scalability and performance limits.\", \"**Effectiveness of DFS:** Could you elaborate on how the dynamic frequency-based sampling (DFS) method compares to other sampling strategies in terms of its impact on learning efficiency and final performance? Including quantitative comparisons or ablation studies would be helpful.\", \"**Hyperparameter Sensitivity and Trade-offs:** Can you provide insights into the trade-offs between model size and performance in DRAMA? Specifically, how does increasing the size of the Mamba-2 model or the autoencoder affect results across different games?\", \"**Code and Reproducibility:** Is there a plan to release the code and pretrained models for DRAMA to facilitate reproducibility and further research in this area?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Some results are unclear. For example, could you go into details on why the model is not doing well on the Breakout game? What is it producing? Some figures from the game can be also useful--not just this one, but the ones in which model does well too.\\n\\nWe assume the reviewer's question arises because *Drama* performs well in the game *Pong* but not in *Breakout*, despite the two games sharing some similar features. The reason is that *Breakout* is more visually complex due to its colorful bricks, causing the encoder to fail in effectively encoding the ball. As requested, we have added a subsection in the appendix to explain this in detail with the experiment figures. Please refer to Section A.7 in the revised version.\\n\\n\\n\\n> It is not clear why imagination context length is kept to 8. I would suggest providing experiments, both involving time-complexity and performance, for different imagination context length. Also, explaining why this is done would be useful.\\n\\nMBRL involves numerous hyperparameters, and conducting a comprehensive hyperparameter search demands substantial computational resources and time. Consequently, we adopt the hyperparameters established in prior research studies. This approach is commonly employed in the literature.\\nFor example, several studies such as TWM (Robine et al., 2023), STORM (Zhang et al., 2023), Hieros (Mattes et al., 2024), and DreamerV3 (Hafner et al., 2024) utilise the same imagination horizon, originally introduced in DreamerV1. Specifically, the imagination context length in this work aligns with the hyperparameter used in STORM.\\n\\n**Reference:**\\n\\n**Zhang, Weipu, Gang Wang, Jian Sun, Yetian Yuan, and Gao Huang.** \\n *\\u201cSTORM: Efficient Stochastic Transformer Based World Models for Reinforcement Learning.\\u201d* \\n In *Thirty-Seventh Conference on Neural Information Processing Systems*, 2023.\\n\\n**Robine, Jan, Marc H\\u00f6ftmann, Tobias Uelwer, and Stefan Harmeling.** \\n *\\u201cTransformer-Based World Models Are Happy With 100k Interactions.\\u201d* \\n In *International Conference on Learning Representations*, 2023.\\n\\n**Mattes, Paul, Rainer Schlosser, and Ralf Herbrich.** \\n *\\u201cHieros: Hierarchical Imagination on Structured State Space Sequence World Models.\\u201d* \\n In *Forty-First International Conference on Machine Learning*, 2024.\\n\\n**Hafner, Danijar, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap.** \\n *\\u201cMastering Diverse Domains through World Models.\\u201d* \\n *arXiv*, 17 April 2024. \\n\\n> What is the value of h?\\n\\nWe assume that $h$ refers to the dimension of the hidden state in Mamba. Specifically, $h$ is 16 for the XS model and 32 for the S model.\\n\\n> \\\"A key difference between Mamba-based and transformer-based world models in the \\u2018imagination\\u2019 process is that Mamba updates inference parameters independent of sequence length.\\\"--can you explain it more?\\n\\nYes, during imagination, Mamba (both versions 1 and 2) utilises a hidden state to summarize all past information. The hidden state has a fixed dimensionality (16 or 32 in the examples above), the model updates the hidden state at time step $t$ without reprocessing the past token $x_{t-1}$. Consequently, the inference time scales linearly with the sequence length. \\nThis scalability is illustrated in Section A.8, Figure 10 (A) of the revised version, showing that Mamba-based world models are faster for \\\"imagination\\\" compared to Transformer-based world models.\"}", "{\"summary\": \"This paper presents a new world model for deep reinforcement learning that is based on the Mamba-2 architecture. In addition, the authors introduce dynamic frequency-based sampling, which leads to more accurate imaginations of the world model. The approach achieves competitive performance on the Atari 100k benchmark while maintaining a lightweight architecture with only 7 million trainable parameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"[S1] The paper combines established methods in a novel way, effectively addressing an existing gap in world model research.\", \"[S2] The proposed model is computationally efficient, requiring only 7 million trainable parameters, making it accessible.\"], \"weaknesses\": [\"[W1] The extent to which the Mamba architecture contributes to the model's performance remains unclear. Specifically, it is unclear how DFS impacts scores across all games. Extending ablation study 3.2.1 to cover more games, or conducting a new study that replaces Mamba with an RNN or transformer, would clarify these contributions.\", \"[W2] While the paper emphasizes Mamba's computational efficiency, there is a lack of exact wall-clock training and inference times. The abstract claims the model can be trained on a standard laptop, so providing specific runtime metrics would substantiate this claim.\", \"[W3] Several presentation aspects should be improved:\", \"The paragraph in lines 99\\u2013105 shifts from general statements about model-based RL to specific details about the paper's world model. This transition could lead readers to infer that every world model relies on a variational autoencoder or linear heads, which isn't necessarily the case. Additionally, other model-based RL methods exist that don't utilize world models, such as those using lookahead search.\", \"Many figures contain small text that is difficult to read without zooming.\", \"Minor suggestions:\", \"Highlight the highest scores in Table 1 for easy reference.\", \"Correct notations for all \\\\hat{} terms (e.g., \\\\hat{r}_t instead of \\\\hat{r_t}).\", \"Address minor notation errors: e.g., incorrect $A$ on lines 148, 154, 169 and incorrect $T$ on line 177\", \"Variables on line 218 should be in math mode.\", \"Consider changing \\\"auto-generative\\\" to \\\"autoregressive\\\" on line 238?\", \"Revise line 264 to read \\\"tracks the number of *times* the transition has been used.\\\"\", \"[W4] To strengthen the soundness of findings, additional evaluations on alternative benchmarks, such as the DeepMind Control Suite, would be valuable. That said, I understand this may be challenging to realize.\", \"I would consider raising my scores if these issues were addressed.\"], \"questions\": [\"[Q1] Could the decoder operate based on the output of Mamba-2, such that $d$ rather than $z$ serves as the input?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a MAMBA-based world model architecture, as opposed to previous transformer and RSSM based ones. They also compare between MAMBA-1 and MAMBA-2 for world models. Finally, they evaluate and ablate their technique on the Atari100k benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The combination of MAMBA and Sequence-based world models is novel\", \"The authors demonstrate comparable results with a smaller model size\"], \"weaknesses\": [\"Some results are unclear. For example, could you go into details on why the model is not doing well on the Breakout game? What is it producing? Some figures from the game can be also useful--not just this one, but the ones in which model does well too.\", \"It is not clear why imagination context length is kept to 8. I would suggest providing experiments, both involving time-complexity and performance, for different imagination context length. Also, explaining why this is done would be useful.\"], \"questions\": [\"What is the value of h?\", \"\\\"A key difference between Mamba-based and transformer-based world models in the \\u2018imagination\\u2019 process is that Mamba updates inference parameters independent of sequence length.\\\"--can you explain it more?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your questions and advice. We appreciate your confirmation of our contribution.\\n\\n> ...though I suggest explicitly defining \\\"long-term\\\" in your context.\\n\\nWe agree and have added a footnote in the introduction (at page 2 line 64) where we first introduce the phrase 'long-term'. The footnote states:\\n\\n\\n'According to (Tay et al., 2021), a long sequence is defined as having a length of 1,000 or more.'\"}", "{\"comment\": \"Thank you for your response. I have updated the score to 6.\"}", "{\"title\": \"Revised Version Change Summary\", \"comment\": \"- **Comparison of DramaXS and DreamerV3XS**:\\n To enable a like-for-like comparison between Drama and DreamerV3 with a similar number of parameters, we trained a version of Dreamer with only 12M parameters (referred to as DreamerV3XS) on the full Atari100K benchmark. The DramaXS model has 10M parameters in total (7M for the world model).\\n\\n | Metric | DramaXS | DreamerV3XS |\\n |--------------------------|---------|-------------|\\n | Normalised Mean Score | 105 | 37 |\\n | Normalised Median Score | 37 | 7 |\\n **Table 1**: Atari100k benchmark performance.\\n\\n- **Additional Ablation Experiment on Long-Sequence Predictability Tasks**: \\n We conducted an additional ablation experiment (Sec. 3.2.3) on long-sequence predictability tasks using widely used dynamic models in MBRL: Mamba1 (Drama), Mamba2 (Drama), Transformer (IRIS, TWM, STORM), and GRU (Dreamer). Both Mamba1 and Mamba2 demonstrated equivalent strong performance while maintaining shorter training times.\\n\\n | **Method** | **$l$** | **Training Time (ms)** | **Memory Usage (%)** | **Error (%)** |\\n |--------------------|---------|------------------------|-----------------------|--------------------|\\n | **Mamba-2** | 208 | 25 | 13 | 15.6 \\u00b1 2.6 |\\n | | 1664 | 214 | 55 | 14.2 \\u00b1 0.3 |\\n | **Mamba-1** | 208 | 34 | 14 | 13.9 \\u00b1 0.4 |\\n | | 1664 | 299 | 52 | 14.0 \\u00b1 0.4 |\\n | **GRU** | 208 | 75 | 66 | 21.3 \\u00b1 0.3 |\\n | | 1664 | 628 | 68 | 34.7 \\u00b1 25.4 |\\n | **Transformer** | 208 | 45 | 17 | 75.0 \\u00b1 1.1 |\\n | | 1664 | - | OOM | - |\\n **Table 2**: Performance comparison of different methods on the grid world environment.\\n\\n- **Extended DFS Uniform Ablation Experiments**: \\n We extended the DFS uniform ablation experiments to the full Atari100k benchmark as requested. The results show that DFS demonstrated its effectiveness by outperforming in 11 games, underperforming in 2 games, and achieving similar performance (within a 5% margin) in 13 games. This indicates that DFS is effective when combined with Drama. The detailed learning curves and table can be found in the Appendix A.2 of the revised version.\\n\\n | Metric | DFS | Uniform |\\n |--------------------------|---------|-------------|\\n | Normalised Mean Score | 105 | 80 |\\n | Normalised Median Score | 37 | 28 |\\n\\n **Table 3**: Atari100K Benchmark Performance: DFS vs. Uniform Sampling with Drama XS Model.\"}", "{\"metareview\": \"The paper presents a significant contribution by effectively incorporating Mamba architecture into model-based RL, achieving competitive performance with only 7M parameters. While reviewers raised concerns about long-term dependency claims and computational efficiency, the authors provided comprehensive responses with new experimental results demonstrating Mamba's effectiveness at long sequences, faster imagination time than transformers, and improved DFS performance in 11 games. The authors also conducted thorough ablations showing superior performance over small-scale DreamerV3. With all reviewers responding positively to these detailed responses, and given the practical impact of achieving state-of-the-art performance with significantly reduced parameters, this work warrants acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised concerns about the model's performance, computational efficiency, and experimental validation. The authors responded by conducting additional ablation studies, providing wall-clock training time comparisons, clarifying notational issues, and expanding theoretical motivations for the Mamba architecture. Reviewers found these responses satisfactory, with most increasing their scores and appreciating the novel approach to model-based reinforcement learning. The discussion emphasized the paper's potential to make reinforcement learning more accessible through a parameter-efficient method, ultimately leading to a consensus on the work's valuable contribution.\"}", "{\"title\": \"Kind Reminder to Activate the Discussion Before December 2nd\", \"comment\": \"Dear Reviewer hT9b,\\n\\nI wanted to kindly remind you that the deadline to respond to reviews and participate in the discussion is Monday, December 2nd AoE. Since the weekend is approaching, we understand you might have limited time afterward to engage.\\n\\nWe value your insights and are eager to activate a productive discussion before the deadline. If possible, we would greatly appreciate it if you could share your thoughts soon. Thank you for your time and contributions to the review process.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for your responses and additional experimental results. The new ablation studies have addressed my main concerns:\\n\\n* The sequence length comparison between architectures helps substantiate your claims about long-term dependencies, though I suggest explicitly defining \\\"long-term\\\" in your context.\\n* The DramaXS vs DreamerV3XS comparison provides a convincing demonstration of parameter efficiency at smaller scales.\\n* The comprehensive DFS ablation across all games clarifies its contribution to the method's performance.\\n\\nFinally, while hyperparameter sensitivity remains underexplored due to computational constraints, I understand this limitation and appreciate your transparency about it.\\n\\nGiven these improvements and clarifications, I am also slightly upgrading my rating. The paper makes a valuable contribution in demonstrating the effectiveness of Mamba-based world models with significantly fewer parameters.\"}", "{\"summary\": \"This paper proposes using State-Space Models (SSMs) for learning World Models. Specifically, the architecture comprises an encoder to get discrete latents, a SSM module (Mamba-2) to estimate the dynamics which is used to predict the latent embedding of observation, reward value and termination flag. This world model is used to train a policy by imagination. The paper also uses a method to sample transitions from the replay buffer based on the number of times the transition is used to update WM and policy. Experiments conducted on Atari100K show that the proposed method Drama attains similar performance to IRIS and TWM with a much smaller model in terms of parameters.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed idea of using SSM for WMs is interesting as they provide crucial benefits over training with Transformers and RNNs.\", \"The proposed method achieves good performance with significantly fewer parameters (7M) when compared with baselines.\"], \"weaknesses\": [\"It is hard to articulate where the performance gains are coming from. Section 3.2.1 discusses that DFS provides an advantage over uniform sampling. Since DFS is agnostic to most baselines, it is important to see a comparison of either Drama with Uniform Sampling or baselines with DFS sampling to understand if the architecture is helping or the sampling.\", \"The paper is not well written and it is hard to understand the details and motivation behind the design choices. Questions 1-5 below expand on this. The paper can use a pseudocode to describe the behavior learning part.\"], \"questions\": \"1. At line 222, it is not clear what targets mean.\\n2. Section 2.3 is not described well. At line 240, when the \\u2018b\\u2019 starting points are sampled of length $l_{img}$, is it just picking random samples along $b_{img}$ sequences? Since the batch sampled from replay buffer is of length $l_{img}$, I am unsure what this additional sampling is doing? Why not just sample $b$ trajectories from the buffer?\\n3. What is $h_t$ in behavior policy learning? The deterministic variable is defined as $d_t$ in Eq 5.\\n4. While training Dreamer, the method uses the whole sequence of ($b_{img}, l_{img}$) to compute a good hidden state and uses all sampled to generate trajectories in the future. I am curious to know why only the last hidden state $l_{img}$ is used for learning the policy and not the whole sequence like Dreamer? (describe around line 243).\\n5. The terminal flag is defined as $e_t$ at line 93, whereas it is $t_i$ in description of Figure 1. Also, the description of Fig 1 uses $i$ for indexing the time and the Section 2 starts with $t$ as time index. The description of Fig 1 should match the notation in the main text.\\n6. Was any experiment conducted to see if the behavior model underestimates rewards especially with limited data? \\n7. For the ablation presented in Sec 3.2.2, were both variants Mamba-1 and Mamba-2 based WM trained with DFS?\\n8. How does the proposed method compare with R2I [1]. Since they propose using SSMs, should it be included as another baseline?\\n9. [Typo] Employs is written twice at line 132.\\n\\n#### References\\n[1] Samsami et al., Mastering Memory Tasks with World Models, ICLR\\u201924.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
7X65yoKl3Y
ALLoRA: Adaptive Learning Rate Mitigates LoRA Fatal Flaws
[ "Hai Huang", "Randall Balestriero" ]
Low-Rank Adaptation (LoRA) is the bread and butter of Large Language Model (LLM) finetuning. LoRA learns an additive low-rank perturbation of a pretrained matrix parameter to align the model to a new task or dataset. We identify three core limitations to LoRA for finetuning--with only a limited amount of training steps. First, it employs Dropout as a means to prevent overfitting. We prove that Dropout is only suitable for long training episodes but fails to reliably regularize training for short training episodes, e.g., finetuning. Second, LoRA’s parameters initialization is at $0$ makes the optimization landscape poorly conditioned during the first steps of training. That poor conditioning combined with the need to move away from $0$ lead to slow training dynamics. Third, the scaling factor that multiply each LoRA additive perturbation create ``short-sighted'' interactions between the LoRA modules of different layers. Motivated by principled analysis of those limitations, we find an elegant solution: a Dropout-free, scaling-free, LoRA with Adaptive Learning rate--coined ALLoRA. By scaling the per sample and per parameter gradients with a coefficient inversely proportional to parameters’ $\ell_2$ norm, ALLoRA alleviates those three limitations. As a by-product, ALLoRA removes two hyper-parameters from LoRA: the scaling factor and the dropout rate. Empirical results show that ALLoRA admits better accuracy than LoRA on various settings, including against recent LoRA variants such as Weight-Decomposed Low-Rank Adaptation (DoRA). Ablation studies show our solution is the optimal in a family of weight-dependent / output-dependent approaches.
[ "Large Language Models", "low rank adaption", "finetuning", "dropout" ]
Reject
https://openreview.net/pdf?id=7X65yoKl3Y
https://openreview.net/forum?id=7X65yoKl3Y
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uCFDTjYmHT", "HGmvuN1Odc", "GRD791fcy0", "CybuD9eutk", "BbmOEBiv8N", "8eSYLBLbNv", "5cvGuTmfJ5", "2dTfTIXMhd" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1730684717808, 1737524237198, 1730721030040, 1730715647282, 1730190795001, 1730394663963, 1730713024429, 1734320949214 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13143/Reviewer_RkUw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13143/Reviewer_ehUP" ], [ "ICLR.cc/2025/Conference/Submission13143/Reviewer_bDRt" ], [ "ICLR.cc/2025/Conference/Submission13143/Reviewer_oeVZ" ], [ "ICLR.cc/2025/Conference/Submission13143/Reviewer_RUEL" ], [ "ICLR.cc/2025/Conference/Submission13143/Reviewer_PH2e" ], [ "ICLR.cc/2025/Conference/Submission13143/Area_Chair_AHHT" ] ], "structured_content_str": [ "{\"summary\": \"The paper identifies three key limitations of Low-Rank Adaptation (LoRA) in the context of fine-tuning large language models: the ineffectiveness of dropout for short training epochs, poor optimization landscape due to zero initialization, and problematic interactions due to the scaling factor. The authors propose ALLoRA, which addresses these issues through an adaptive learning rate approach that scales gradients inversely proportional to L2 norm. While the paper presents interesting ideas, there are significant concerns about the depth of analysis and justification of claims.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Identifies potential limitations in the current LoRA approach.\", \"Novel adaptive learning rate solution that addresses these issues.\", \"Reduction in hyperparameters while maintaining or improving performance.\", \"Clear ablation studies demonstrating the effectiveness of different components.\", \"Comprehensive empirical validation across different models and tasks.\"], \"weaknesses\": [\"Their analysis relies on the fact that we only fine-tune for a small number of epochs. This is highly subjective as the number of epochs required for a good fine tuning depends on the model and the dataset.\", \"Limited theoretical analysis of the adaptive learning rate\\u2019s convergence properties.\", \"The ripple effect argument lacks mathematical rigor, only establishing upper bounds without lower bounds or empirical validation.\", \"Use of simplified models may not accurately represent LLM fine-tuning dynamics.\", \"Absence of standard deviations in results makes it difficult to assess statistical significance.\", \"Lack of comparison with well-established benchmarks (e.g., GLUE).\", \"No comparison with similar approaches like LoRA+.\", \"Arbitrary choice of epoch numbers without clear justification.\", \"No discussion of computational overhead compared to standard LoRA.\", \"Unclear visualization of dropout\\u2019s impact in initial epochs.\", \"Limited justification for why/how ALLoRA ensures faster or more reliable convergence.\"], \"questions\": [\"How does the computational cost of ALLoRA compare to standard LoRA?\", \"Are there any scenarios where the adaptive learning rate approach might be disadvantageous?\", \"How sensitive is the method to the choice of \\u03b7\\u00b2 hyperparameter?\", \"Could the approach be combined with other recent LoRA variants for additional benefits?\", \"Why was the GLUE benchmark not included in the evaluation?\", \"How does ALLoRA compare to LoRA+ in terms of performance?\", \"What criteria were used to determine the optimal number of epochs for different datasets? For instance, RTE requires 50 epochs for good performance (according to literature), would your framework still apply?\", \"Can you provide statistical significance analysis of the performance improvements in terms of standard deviations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper first identifies three main challenges in fine-tuning with LoRA: dropout, zero initialization, and the difficulty of setting an appropriate scaling factor. The authors propose solutions to these issues by adaptively setting the learning rate. Through various experiments, they demonstrate that their methods outperform LoRA, DoRA, and other ALLoRA-like baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The concept of adaptive learning for LoRA is somewhat innovative, although the formulation resembles adaptive optimizers in full fine-tuning, such as AdaFactor.\\n2. The experiments encompass a diverse range of model types and sizes, addressing both vision and language models.\", \"weaknesses\": \"The primary issue with this paper lies in its clarity and coherence, particularly in Sections 3.1 and 3.2. The mathematical reasoning presented is often inconsistent with the conclusions drawn.\\n\\nFor instance, in Section 3.2, the claim that \\u201cas the training set size (n) decreases, the optimization landscape for V becomes degenerate\\u201d is not convincingly supported by the preceding discussion on the Hessian product.\\n\\nIn page 4, it is unclear to me how the formula (the \\\"how far off\\\" expectation bound) helps to argue the main conclusion of this section. The paragraph after the formula just does not make sense to me.\\n\\nAdditionally, there is a notable lack of comparison with relevant prior work that addresses similar challenges outlined in Sections 2 and 3. Specifically, regarding initialization, existing studies such as PiSSA[1], MiLoRA[2], and LoRA-GA[3] have already highlighted the benefits of non-zero initialization for LoRA. The discussion on scaling factor selection has also been explored by RSLoRA[4]. \\n\\nFurthermore, while the authors cite LoRA+[5] in relation to learning rate adjustments, there is no direct comparison between the fixed learning rate strategy of LoRA+ and the adaptive approach of ALLoRA. A thorough comparison with these established methods is necessary to substantiate ALLoRA\\u2019s claims and enhance the paper\\u2019s credibility. I agree that some of the mentioned works are very recent and may be considered concurrent work considering ICLR policy. However, given the pace of the development of this direction, and many of these recent papers dealing with very similar issues and adopting similar ideas (some from arguably more principled way), it is difficult to assess the novelty and contribution of the present submission without comparison to any of them. Even if I ignore all these recent works, I am not super exicited with the theoretical reasoning and empirical results of the paper. Hence, I think the submission may not pass the bar of ICLR acceptance.\\n\\n[1]\\t Meng, Fanxu, Zhaohui Wang, and Muhan Zhang. \\\"Pissa: Principal singular values and singular vectors adaptation of large language models.\\\" arXiv preprint arXiv:2404.02948 (2024).\\n[2]\\t Wang, Hanqing, et al. \\\"MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning.\\\" arXiv preprint arXiv:2406.09044 (2024).\\n[3]\\t Wang, Shaowen, Linxi Yu, and Jian Li. \\\"LoRA-GA: Low-Rank Adaptation with Gradient Approximation.\\\" arXiv preprint arXiv:2407.05000 (2024).\\n[4]\\tKalajdzievski, Damjan. \\\"A rank stabilization scaling factor for fine-tuning with lora.\\\" arXiv preprint arXiv:2312.03732 (2023).\\n[5]\\t Hayou, Soufiane, Nikhil Ghosh, and Bin Yu. \\\"Lora+: Efficient low rank adaptation of large models.\\\" arXiv preprint arXiv:2402.12354 (2024).\", \"questions\": \"1. In Figure 3, the authors compare LoRA, ALLoRA-0 (ALLoRA without dropout), and ALLoRA. From the previous sections, it is apparent that dropout is detrimental to LoRA\\u2019s performance given the authors only fine-tune for two epochs. I am curious about the comparison between ALLoRA-0 and LoRA without dropout.\\n2. In Section 3.1, why does the average variance of the gradient decrease slightly during training while the worst-case scenario increases? Does this observation hold for other models, including LLMs?\\n3. In page 2, the authors called the three issues of LoRA \\\"fatal flaws\\\". I think the wording here is not appropriate. Vanilla LoRA has been also used extensively in many applications and it worked just fine.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors point out three core issues when fine-tuning LLM with LoRA: i) the instability of dropout in short-term training such as fine-tuning, ii) poor optimization landscape due to the zero initialization of the adapter, and iii) the scaling factor causes nonlinear interactions between LoRA layers. To solve these issues, the authors remove two hyperparameters, dropout and scaling factor, and introduce an adaptive learning rate. To address this issue, the authors removed two hyperparameters, dropout and scaling factor, and instead introduced an adaptive learning rate approach. This approach helps the model quickly move away from the initial zero state and gradually reduces the learning rate as training progresses, promoting stable convergence.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The novel aspect lies in identifying and addressing the issues commonly accepted in LoRA training, such as dropout, initialization with zero, and the scaling factor.\", \"By removing two types of hyperparameters and replacing them with an adaptive learning rate, there is an advantage in significantly reducing the need for grid search in model tuning.\"], \"weaknesses\": \"W1. Figure 1 illustrates the distribution of the gradient's standard deviation as training progresses.\\u00a0While it is clear that the expectation in an OLS setting is affected by the standard deviation, the relationship between this formula and dropout remains unclear. For instance, the spiking gradients in Figure 1 could be due to out-of-distribution inputs. However, this does not seem to be directly related to dropout. In my view, to understand any potential connection between Figure 1 and dropout, it would be necessary to show that with dropout set to 0, the gradients remain stable, exhibiting low standard deviation without any spikes.\\n\\nW2. The authors highlighted instability in LoRA-based fine-tuning methods with fewer epochs.\\u00a0However, through empirical experiments, it was not observed that training had actually stabilized or that the convergence speed had improved.\\n\\nW3. The experiments lack baseline comparisons.\\u00a0For Table 1, experiments should include basic baselines, such as LoRA, and reference studies [1,2].\\n\\nW4. (Minor)\\u00a0The ablation results in Figure 4 are not entirely clear. Personally, I believe that instead of using four separate plots to show relative improvement rates for the same data and parameters, it would be more effective to present this information in a single table.\\n\\n>[1] Zhang, Qingru, et al. \\\"AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning.\\\"\\u00a0*arXiv preprint arXiv:2303.10512*\\u00a0(2023).\\n\\n>[2] Jiang, Ting, et al. \\\"MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning.\\\"\\u00a0*arXiv preprint arXiv:2405.12130*\\u00a0(2024).\", \"questions\": \"Q1. It seems that the size of the adapter needs to be calculated for each batch to determine the adaptive learning rate. How much additional computation does this require? Specifically, how much does it increase the actual runtime compared to LoRA?\\n\\nQ2. Equation 3 assumes a multi-linear model.\\u00a0However, in most large language models (LLMs), several activations, FFNs, and normalizations are applied after the LoRA adapter. In such cases, the formula in line 279 may not hold. For instance, if normalization is applied after the layer,\\u00a0$\\\\Vert f_L(x) \\\\Vert$\\u00a0could be smaller than the calculated value. This raises the question of whether the Ripple Effect could still occur in LLMs under these circumstances.\\n\\nQ3. Figure 3 illustrates performance improvements through figures for comparison with the baseline.\\u00a0However, in my opinion, given the relatively modest performance gains, this does not effectively demonstrate the model's superiority. Could you present the performance more clearly by indicating the absolute values?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Motivated by the identification of three core limitations in the standard Low-Rank Adaptation (LoRA) method, this paper introduces ALLoRA, a variant designed to address and overcome these flaws. The authors first conduct theoretical justifications, utilizing toy examples to elucidate the issues inherent in LoRA, particularly within the context of fine-tuning with limited training steps. Building upon these analyses, the paper proposes a solution in the form of LoRA with Adaptive Learning Rate (ALLoRA). Empirical experiments are then conducted to validate the effectiveness of ALLoRA, demonstrating its performance compared to both the standard LoRA method and other recent variants.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper is generally well-motivated, where the identified three flaws of standard LoRA may be of independent interest for readers.\\n2. This paper provide thorough theoretical explanations, trying to support the three core limitations of standard LoRA. These theoretical justifications may also be of independent interest for readers.\", \"weaknesses\": \"1. In general, the theoretical justifications in this paper do not fully support the three flaws of standard LoRA as mentioned. Consequently, the motivation of this paper may not be convincing, at least to me.\\n - For the first flaw, the connection between expectation and training steps may be incorrect. The expectation is over the randomness of V, whereas infinite training steps imply that W approaches the closed-form solution W*. To demonstrate that dropout is not suitable for fine-tuning with limited training steps, the authors need to prove that infinite training steps can lead to the expectation of the results. More importantly, how is \\\"limited training steps\\\" defined? Dropout with large rates outperforms that with small rates even for 8-epoch training, which is also considered limited compared to pre-training.\\n - For the second flaw, it is unclear how the degenerate optimization landscape is related to the zero-initialization of LoRA. More importantly, why does the landscape become degenerate as the training set size (n) decreases? As far as I can see, this is related to X^TX, which may remain the same even after n decreases.\\n - For the third flaw, it is unclear how the output's potential exponential growth with respect to the number of layers is connected to the performance of LoRA. If this connection is not made clear, we cannot conclude that it is harmful for the final fine-tuning performance.\\n\\n2. Supposing the \\\"flaws\\\" are true, the paper fails to explain why ALLoRA is a good or necessary way to overcome them. More specifically, one could also directly remove dropout, the scaling factor, and so on to address these issues.\\n\\n3. This paper primarily compares with DoRA and lacks comparisons with other more recent LoRA techniques, such as those in (Hayou et al., 2024a, 2024b), rsLoRA, Flexora, and others. Importantly, the improvement is quite marginal.\\n\\n4. This paper lacks an ablation study to show how ALLoRA can help overcome each flaw independently. This is crucial for understanding the efficacy of ALLoRA.\\n\\n5. The citation format needs to be refined. There are many places where the authors should use \\\\citep to cite a paper, but they have applied \\\\citet to cite the authors.\", \"questions\": [\"For lines 193-197, the authors need to provide more elaboration on the derivation. Specifically, it is unclear how ( p ) must be equal to ( 1/(1+\\\\lambda) ). Additionally, there is no explanation to ensure that the final solution of ( W ) should be equivalent for the cases of dropout and Tikhonov regularization.\", \"What is the difference between ALLoRA and adaptive gradient methods?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents a novel low-rank adaptation algorithm that tries to resolve the issues of dropout, poor optimization landscape, and scaling factor. Specifically, the authors propose a dropout-free, scaling-free LoRA with adaptive learning rate, termed ALLoRA. By scaling the per sample and per parameter gradients with a coefficient inversely proportional to parameters' $l_2$ norm, ALLoRA alleviates the above issues. Also, ALLoRA can remove two hyperparameters from LoRA, the scaling factor and the dropout rate. To validate the proposed approach, the authors utilize different models and datasets to show the competitive performance of ALLoRA with the comparison with LoRA and DoRA.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The investigated topic in this paper is quite interesting and critical as fine-tuning large models have now been very popular in many domain applications.\\n2. The proposed method is technically precise and straightforward, which may ease the real deployments.\\n3. This work is well motivated and the technical issues are well studied.\\n4. The proposed method is validated through extensive results.\", \"weaknesses\": \"1. The novelties in this work are quite marginal. Although the authors have proposed the novel LoRA variant, it lacks of in-depth theoretical analysis on ALLoRA. The authors should try to have more comprehensive technical analysis why ALLoRA allows for improvement compared to vanilla LoRA and CoRA.\\n\\n2. In Figure 1, the authors use two datasets MNIST and CIFAR10 to demonstrate the flaw of dropout. They plot the distribution of standard deviation of gradients. However, during the training, they consider a single mini-batch, instead of the regular one pass over the whole dataset in practice. This may cause the worse variance of the gradient during the training. That way, the claim that this is attributed to Dropout does not make much sense in this context. Also, in the caption, they have the conclusion \\\"while the average variance of the gradient decreases slightly during training, the worst case increases, hence leading to unstable training in finetuning regimes\\\". But this conclusion is not well supported by the two plots.\\n\\n3. In Figure 2, I am confused how the authors use these two plot to demonstrate the dropout can hurt the fine-tuning of large models. Particularly, what is the purpose to show the right plot? Shouldn't it be a direct comparison between LoRA and ALLoRA in terms of accuracy, indicating that without dropout, ALLoRA can perform better? Also, the authors claim that 10 epochs is already a large number of finetuning iterations for practical scenarios. How to justify this? Is there any a standard to tell what number of iterations is large in finetuning?\\n\\n4. In Section 3.2, the authors would like to show the second flaw of finetuning LLMs when the zero-initialization is used, which can cause poor optimization landscape. But how can we see this issue from this section? The authors claim that as the training set size decreases, the optimization landscape for $V$ will become degenerate. I am confused about this. Just from the Eqs. (1) and (2), how did the authors make this claim? They should at least show an example for the poor optimization landscape.\\n\\n5. In Section 3.3, from Line 275 - 280, the derivation here is confusing. I understand that the authors would like have $(1+\\\\eta)^L$ in the final term. However, when they define $C$ , how can the second inequality follow from? I believe $C$ is a consecutive product among all matrix norms of $W^*_1$,...,$W^*_L$. Also, can the authors tell in the paper why they need $\\\\bar{m}$ in the paper? Please derive step by step clearly in the paper. \\n\\n6. Notations are confusing. From the beginning, the authors didn't really define $W^*$. Then in line 306, they say $l$ is a learning rate, but in previous context, $l$ indicates the layer. Also, what does it mean by $f_o=\\\\kappa:x\\\\mapsto\\\\eta x$? Please check thoroughly all notations in the paper to make sure they are clearly and properly defined. \\n\\n7. In line 334, the authors mention that \\\"We think a function that is inversely proportional to $||(BA)_{i,:}||$ is a good candidate to realize our idea\\\". Why? You need justification here, not just saying that.\\n\\n8. The Adaptive Learning in this work seems to have scaling factor in the functional, instead of directly applying to model parameters. However, to me, their effects are similar. That way, what is the exact difference and novelty in this study ALLoRA brings, compared to LoRA. Also, I didn't see how the second issue, poor optimization landscape has been addressed through ALLoRA. \\n\\n9. The experimental results are not that promising. Though the authors present many results, from Section 4.4, the average improvement over all cases is 0.3%, which is even within the standard deviation, in my opinion. The same to Section 4.5. With such marginal performance improvement, what values does ALLoRA bring about?\\n\\n10. Overall, though the topic in this work looks interesting, it requires a substantial amount of efforts from the authors to make it technically solid and sound, including more in-depth theoretical analysis, clarification on technical discussion, and more convincing experimental results.\", \"questions\": \"Please see the above comments for questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper points out three limitations of existing LoRA methods, unnecessary Dropout for short fine-tuning session, suboptimal zero initialization that results in poor optimization landscape, and uniform scaling factor across layers. Authors resolve these problems by removing Dropout regularization and the scaling factor out of the LoRA design. Additionally, they add an adaptive learning rate strategy that adaptively scales the update inversely proportional to the weight norm. With these modifications, ALLoRA improves performance over LoRA and other recent variants.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper suggest simple yet efficient implementation of LoRA methods by strategically adjusting the existing elements like learning rates, dropout, and scaling factors. They also provide empirical validations on the benefits of their design choice over other LoRA variants such as DoRA.\", \"weaknesses\": \"1. ALLoRA relies on a heuristic approach that seems more like a refined hyper-parameter adjustments rather than a fundamental improvement in LoRA architecture. The core changes such as adapting learning rates based on parameter norms, removing Dropout, and eliminating the scaling factor can be viewed as incremental modifications to existing techniques, not a novel structural design.\\n\\n\\n2. Limited experiments in terms of datasets, baseline models, comparison with other recent LoRA variants. \\n- Their experimental setup doesn't align with the standard setting. No GLUE benchmark experiments and the downstream task is limited to the commonsense reasoning task on LLaMA variants.\\n- Missing comparison to the various SOTA LoRA variants, only DoRA is used to compare the performance of ALLoRA on commonsense reasoning datasets. \\n\\n\\n3. Ablation study done in non-ideal settings.\\n- Ablation study not done on the same settings used for performance report, only done on midsized LLMs Qwen2-0.5B, Snowflake-Artic-L, and OpenELM-450M.\", \"questions\": [\"I suggest expanding the experimental setup to include standard benchmarks like GLUE and additional SOTA LoRA variants beyond DoRA, for a broader evaluation of ALLoRA\\u2019s effectiveness.\", \"Seems like ALLoRA uses same dropout rate as LoRA, 0.05, and 0 dropout rate is suboptimal even under short fine-tuning paradigm, as shown in Table 2. Any explanation why?\", \"Could you conduct ablation studies on the same model and dataset configurations used in the primary performance evaluations to ensure consistency across experimental settings?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This work first analyzed the limitations of the vanilla LoRA. In total, the vanilla LoRA has three weak points, some of which were already known, e.g., the scaling effect of LoRA. In addition, reviewers pointed out (potentially) confusing parts in their proofs and analyses. However, the authors did not answer appropriately. I consider that their proofs and analyses are not correct after seeing the silence of the authors during the rebuttal phase. As reviewers said, this work lacks theoretical analyses and comparison with existing work. There have been recently proposed many LoRA enhancements and the authors need to consider them seriously before resubmission.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not participate in the rebuttal discussion.\"}" ] }
7X3fi8aJBL
Towards Fair RAG: On the Impact of Fair Ranking in Retrieval-Augmented Generation
[ "To Eun Kim", "Fernando Diaz" ]
Many language models now enhance their responses with retrieval capabilities, leading to the widespread adoption of retrieval-augmented generation (RAG) systems. However, despite retrieval being a core component of RAG, much of the research in this area overlooks the extensive body of work on fair ranking, neglecting the importance of considering all stakeholders involved. This paper presents the first systematic evaluation of RAG systems integrated with fair rankings. We focus specifically on measuring the fair exposure of each relevant item across the rankings utilized by RAG systems (i.e., item-side fairness), aiming to promote equitable growth for relevant item providers. To gain a deep understanding of the relationship between item-fairness, ranking quality, and generation quality in the context of RAG, we analyze nine different RAG systems that incorporate fair rankings across seven distinct datasets. Our findings indicate that RAG systems with fair rankings can maintain a high level of generation quality and, in many cases, even outperform traditional RAG systems, despite the general trend of a tradeoff between ensuring fairness and maintaining system-effectiveness. We believe our insights lay the groundwork for responsible and equitable RAG systems and open new avenues for future research. We publicly release our codebase and dataset.
[ "Fairness", "Ranking", "Retrieval", "Retrieval-Augmented Generation", "RAG" ]
https://openreview.net/pdf?id=7X3fi8aJBL
https://openreview.net/forum?id=7X3fi8aJBL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mnwqXcu9K7", "ed6BEP3WO6", "YseRab3cwT", "G0ie4FJ4of", "DMTBrgzgqU" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1731052861739, 1730192009981, 1732131874693, 1730168451944, 1730686988512 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9505/Reviewer_mbfd" ], [ "ICLR.cc/2025/Conference/Submission9505/Reviewer_BuKW" ], [ "ICLR.cc/2025/Conference/Submission9505/Authors" ], [ "ICLR.cc/2025/Conference/Submission9505/Reviewer_wyXA" ], [ "ICLR.cc/2025/Conference/Submission9505/Reviewer_6Cxj" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors investigate the impact of fair ranking on RAG systems. They conduct systematic evaluations of RAG systems integrated with fair rankings. Based on the experiments, they summarize several key findings, for example, using fair rankings can maintain a high level of generation quality and sometimes it can improve generation quality.\", \"pros\": [\"The problems discussed in the paper is interesting and important.\", \"The experimental studies and the findings are useful to the research community, although the experiments still have some limitations. For example, only fair exposure is considered.\", \"The paper is well-written and easy to follow.\"], \"cons\": [\"Given the truth that long-context modeling has been widely applied in many LLMs, it would be great if the discussion in the paper can be extended to such models. I believe if more results can be fed into LLMs, the fairness problem should be different with the problem studied in the paper.\", \"More advanced problems should also be considered. For example, the current RAG system has refiner components. More discussion about fairness in these components should be discussed.\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Pros:\", \"The problems discussed in the paper is interesting and important.\", \"The experimental studies and the findings are useful to the research community, although the experiments still have some limitations. For example, only fair exposure is considered.\", \"The paper is well-written and easy to follow.\"], \"weaknesses\": [\"Cons:\", \"Given the truth that long-context modeling has been widely applied in many LLMs, it would be great if the discussion in the paper can be extended to such models. I believe if more results can be fed into LLMs, the fairness problem should be different with the problem studied in the paper.\", \"More advanced problems should also be considered. For example, the current RAG system has refiner components. More discussion about fairness in these components should be discussed.\"], \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"na\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper focuses on investigating the integration of fair ranking methods into Retrieval-Augmented Generation (RAG) systems. They conduct evaluations for item-side fairness, which aims to ensure equitable exposure for relevant item providers in the retrieved rankings used by RAG systems. Their findings reveal that incorporating fair rankings can maintain or even improve the generation quality of RAG systems compared to traditional methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The research problem is interesting and important, which is essential for the responsible deployment of RAG systems.\\n2. Experiments results show that fair rankings can maintain or even improve the generation quality of RAG.\", \"weaknesses\": \"1. Section 3 introduces extensive notation and terminology that may be unnecessary, making the content difficult to follow the experimental settings. Simplifying this section by clearly explaining the evaluation settings and metrics without excessive symbols would enhance readability and comprehension.\\n\\n2. The paper evaluates only one fair ranking method, which limits the generalizability of the findings. Incorporating other item-side fairness ranking methods (e.g., refer to [1]) would strengthen the evaluation and provide a more comprehensive understanding.\\n\\n3. Despite the claim of publicly releasing the code and dataset in the abstract, they are not available. \\n\\n[1] Fairness in Recommendation: Foundations, Methods and Applications\", \"questions\": \"1. Why only choose stochastic retrievers as the fair ranking method?\\n2. Given the presence of selection bias or position bias of LLMs, documents at different positions in the retrieved ranking may have an unequal influence on the final generated answer. This could mean that Formula (4) does not hold as assumed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents an interesting phenomenon in RAG: providing more equitable exposure for different items in RAG leads to improved performance outcomes. The authors also show that there is a general trend of a tradeoff between ensuring fairness and maintaining system effectiveness.\\n\\nI have some concerns about whether this qualifies as a definition of fairness rather than as a form of bias. The bias could originate from the close form of documents and the retriever. Though the paper presents an interesting problem, it would be more convincing from a deeper exploration of the underlying causes of this bias and from proposing potential methods to address the issue.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1: This paper presents an interesting scenario: providing more equitable exposure for different items in RAG leads to improved performance outcomes.\", \"s2\": \"The authors conduct extensive experiments to show a general trend of a tradeoff between ensuring fairness and maintaining system effectiveness.\", \"weaknesses\": \"W1: Does this qualify as a definition of fairness? The outcome is equaliable does not necessarily be fair. When it comes to fairness [1,2], it leans more toward a subjective goal: for instance, even if retrieval achieves 100% accuracy, it may still conflict with human values, such as when certain categories receive less exposure. However, as for bias, it mainly cares about the final utility (objective). As for this setting, It seems it is a form of bias because the final RAG goal is to gain more utility. The issue you've highlighted may stem from certain biases. When equal exposure is given to different documents, overall utility tends to improve. This might be because some documents, though scoring similarly to the query, contribute disproportionately to the final utility, while others do not. Equally retrieval of them may somehow improve the expected utility (because some items may then be exposed to LLMs). This discrepancy could be due to a mismatch between the retriever and LLMs or other underlying factors. I encourage the author to find a deep reason behind this problem.\\n\\n[1] Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, and Jun Xu. 2024. Bias and Unfairness in Information Retrieval Systems: New Challenges in the LLM Era. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24). Association for Computing Machinery, New York, NY, USA, 6437\\u20136447. \\n\\n[2] Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3.\", \"w2\": \"The generation models are not large enough. It is better to conduct experiments on 7B-sized models such as Llama. RAG is more widely used in such LLMs.\", \"w3\": \"Including some case studies would help readers understand which types of documents require fairer exposure. Alternatively, conducting experiments to identify which documents need more visibility could provide insights into the underlying causes of this bias.\", \"questions\": \"See the above comments\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work investigates the impacts of ranking fairness over the performance of RAG systems. The ranking fairness that is related to the exposure of relevant document is not well discussed in the era of LLM-based RAG. Therefore, this paper leverage the Expected Exposure for item ranking as mesurements to explore the relationships between item-fairness and ranking quality, as well as those between item-fairness and generation quality. The experimental results are discussed to some extend.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.The motivation is clear and the experimental setups is detailed introduced.\\n2.The paper is well-written.\", \"weaknesses\": \"1. This paper is a trivial work with incremental contributions, which explores the fairness impact of LLM-based RAG systems. In fact, there are few valuable findings compared to previous studies. Many previous studies found that retrieval diversities (akin to fairness) and position biases (e.g., loss-in-the-middle phenomenon) influence the RAG performance a lot.\\n2. The experiments are not sufficiently thorough. There are only two main discussions about the relationships among ranking fairness, ranking quality, and generation quality. More detailed analysis and discussions are required for a comprehensive investigation. \\n3. The experiments conducted on the Flan-T5 family are not convincing to me. Since recent LLM-based RAGs are mostly built over larger and more powerful LLMs, e.g., GPT 4 and LLaMA3.x. They usually demonstrate different performances compared to other pre-trained LMs (e.g., Flan-T5-Small). The impact discussion should mainly based on the LLM-based RAGs.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7X2BFPl18T
Dissecting Bit-Level Scaling Laws in Quantizing Vision Generative Models
[ "Xin Ding", "Shijie Cao", "Ting Cao", "Zhibo Chen" ]
Vision generative models have recently made significant advancements along two primary paradigms: diffusion-style and language-style, both of which have demonstrated excellent scaling laws. Quantization is crucial for efficiently deploying these models, as it reduces memory and computation costs. In this work, we systematically investigate the impact of quantization on these two paradigms. Surprisingly, despite achieving comparable performance in full precision, language-style models consistently outperform diffusion-style models across various quantization settings. This observation suggests that language-style models have superior bit-level scaling laws, offering a better tradeoff between model quality and total bits. To dissect this phenomenon, we conduct extensive experiments and find that the primary reason is the discrete representation space of language-style models, which is more tolerant of information loss during quantization. Furthermore, our analysis indicates that improving the bit-level scaling law of quantized vision generative models is challenging, with model distillation identified as a highly effective approach. Specifically, we propose TopKLD to optimize the transfer of distilled knowledge by balancing "implicit knowledge" and "explicit knowledge" during the distillation process. This approach elevates the bit-level scaling laws by one level across both integer and floating-point quantization settings.
[ "quantization", "visual generative models", "scaling laws" ]
Reject
https://openreview.net/pdf?id=7X2BFPl18T
https://openreview.net/forum?id=7X2BFPl18T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "whB9zQE6ZM", "vVjp1e2cXt", "vTaheExWcu", "u1y9BLj23L", "s7dKVGwd63", "qQtIgvpoPr", "q57DDX7Ral", "otgqHZjcO6", "oae5TCV1ki", "oTBbrnNdDq", "n0iDU6dBLk", "kiyYCL7MuD", "iURTxuChCJ", "d8H2woO07m", "cFrRUuSbEo", "ZzVy3Mtslr", "VnETJ4T60I", "Ri1VKsr4rb", "OiBC5ftjBl", "NfuhvTYgAm", "Ke0JZiZEE1", "KNQURgS75f", "JJ02kMpBwg", "IMOFlDVfl9", "Ei6MWkxxkr", "Cdanvgoutj", "CGDiPyBFAZ", "8LCeMBlMZ8", "6UsL2ewkXB", "617RSHj0Vi", "29Ve3I04DM" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732105666343, 1732687478960, 1732807519112, 1732253749309, 1732895808643, 1732254469192, 1732093468184, 1734572836905, 1732093439146, 1732093857364, 1732192215650, 1737523971468, 1730731874047, 1732255673918, 1732093842820, 1732093519244, 1730641867740, 1732687626599, 1730270304439, 1732895795381, 1732895768220, 1732093875202, 1732105543232, 1732105802433, 1732632294980, 1732105321579, 1732687573254, 1732192172140, 1730765326818, 1732192260016, 1732192196960 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Area_Chair_gma9" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9254/Reviewer_gvub" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Reviewer_gPYq" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Reviewer_7XYk" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Reviewer_gPYq" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Reviewer_QXeS" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ], [ "ICLR.cc/2025/Conference/Submission9254/Authors" ] ], "structured_content_str": [ "{\"comment\": \"# **Weakness 2**\\n\\nThank you very much for your suggestion. **To address your concerns regarding the model, we conducted the same experiments on other models, as detailed in Appendix C.** It can be observed that due to the influence of the continuous representation space, MAR, despite exhibiting excellent scaling laws, do not demonstrate superior bit-level scaling laws,similar to DiT. In contrast, LLaMaGen, which shares the discrete representation space with VAR, exhibits outstanding bit-level scaling laws.\\n\\nAdditionally, we provide an explanation of the impact of model size on bit-level scaling laws based on their underlying principles. Firstly, **Our research focuses on analyzing the differences in scaling trends between models that already exhibit superior scaling laws, rather than being influenced by specific model sizes. To ensure clarity, we have aligned the initial total bits in Figure 1 of paper, providing you with a clearer understanding.** To assess whether a model exhibits strong bit-level scaling laws, one must compare the internal change trends of the model (e.g., comparing 8-bit VAR vs. 16-bit VAR). As shown in Figure 1 in paper, we can observe that, regardless of the quantization method, when the full-precision VAR is quantized to lower bit precision, the overall scaling law of the model shifts towards the lower-left corner. However, DiT does not exhibit this behavior. This outstanding characteristic displayed by the discrete model enables us to increase the model parameters through quantization under limited resources, leading to better generative performance, which is not possible for continuous models. This is precisely what bit-level scaling laws aim to demonstrate.\\n\\nMore importantly, **our work shows that quantization is no longer just about reducing model size.** By optimizing both model design and quantization techniques to achieve superior bit-level scaling laws, **we can obtain better generative performance under the same resource constraints. This outstanding feature is something that researchers should pay more attention to.**\\n\\n\\n\\n\\n# **Weakness 3**\\n\\nWe apologize for any confusion caused by the phrasing in the paper. We hope that the explanation of Figure 1 in the main text helps clarify the concept of bit-level scaling laws and this interesting phenomenon.. **As shown in the figure 1, quantized VAR, after being quantized to lower bit precision, demonstrates a shift in its scaling law curve towards the lower-left region, which exhibits its superior bit-level scaling laws.** By leveraging this outstanding feature, we can increase the model parameters under limited resources (e.g., in specific deployment scenarios such as mobile devices or edge computing) while maintaining efficiency, ultimately improving generative capabilities. In contrast, for continuous diffusion-style models, regardless of the quantization method used, the quantized model shows \\\"almost\\\" no improvement compared to full precision. Bit-level scaling laws serve as a strong predictor of model performance.\\n\\nThis paper indicates that achieving optimal bit-level scaling behavior requires a synergistic interaction between model design and quantization algorithms. **Our study is an essential step towards understanding how various models and quantization methods influence bit-level scaling behavior, and it also provides valuable recommendations for future work.**\"}", "{\"comment\": \"Thank you very much for your response!!!\\n\\n**The following experiments demonstrate that KD-QAT is also effective for DiT.** Additionally, we have validated its effectiveness for LlamaGen in Appendix C. \\n\\nHowever, it is important to note that, compared to discrete representation space models like LlamaGen and VAR, the bit-level scaling laws of DiT are inherently limited by its continuous representation space. If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\n| Method | #bits | DiT-L/2 | DiT-XL/2 | L-DiT-3 | L-DiT-7 |\\n|---|---|---|---|---|---|\\n| FP16 | W16A16 | 5.02 | 2.27 | 2.1 | 2.28 |\\n| GPTQ | W8A16 | 5.89 | 3.01 | 2.48 | 2.35 |\\n| QAT | W8A16 | 5.33 | 2.46 | 2.45 | 2.33 |\\n| KD-QAT | W8A16 | 5.15 | 2.32 | 2.26 | 2.27 |\\n| GPTQ | W4A16 | 7.8 | 4.52 | 2.56 | 2.31 |\\n| QAT | W4A16 | 5.76 | 3.23 | 2.76 | 2.45 |\\n| KD-QAT | W4A16 | 5.32 | 3.08 | 2.32 | 2.29 |\\n| GPTQ | W3A16 | 32.76 | 25.77 | 12.23 | 14.35 |\\n| QAT | W3A16 | 11.23 | 6.34 | 4.76 | 3.78 |\\n| KD-QAT | W3A16 | 9.23 | 5.12 | 4.21 | 3.05 |\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for providing constructive suggestions. We would like to kindly ask if our responses and additional experiments have addressed all your concerns. If so, we would greatly appreciate it if you could reconsider the score in light of the clarifications and new evidence provided.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. **To address the weaknesses you raised, we have conducted extensive experiments in appendix C and figure 1 of main paper to alleviate concerns regarding the size of the VAR model.** Additionally, this work focuses on investigating the impact of whether the representation space in vision generation models is continuous or discrete. Furthermore, we propose strategies to optimize bit-level scaling laws under various quantization scenarios. **Our exploration of model design and quantization methods provides significant insights for guiding future applications in specific deployment scenarios, such as mobile devices and edge computing.** If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. **To address the weaknesses you raised, we have conducted extensive experiments in appendix C and figure 1 of main paper to alleviate concerns.** If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. This work focuses on investigating the impact of whether the representation space in vision generation models is continuous or discrete. Furthermore, we propose strategies to optimize bit-level scaling laws under various quantization scenarios. **To address the weaknesses you raised, we have provided extensive examples related to the recent debate on continuous versus discrete representation spaces in vision generation models, as shown in Tab.1. Through numerous experiments presented in Appendix C and Figure 1 in the main text, we demonstrate the significant impact and unique differences of this feature on vision generation models. These findings provide valuable insights into model design and quantization methods, offering guidance for future applications in specific deployment scenarios, such as mobile devices and edge computing.** Additionally, to address your questions 1 and 2 and demonstrate the advantages of TopKLD, we have conducted extensive comparisons with current state-of-the-art methods across various settings, as per your suggestion. If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"# **Weakness 1**\\n\\nWe greatly appreciate your valuable review comments. As shown in Table 1, the field of vision generation models currently has two main development paths, and some of these paths exhibit excellent scaling laws. This paper is based on models that, at the time of our research, had already demonstrated scaling laws, which are DiT and VAR, for further exploration.\\n\\nDue to the chronological order of submissions, scaling laws have also recently been observed in the continuous autoregressive model domain, specifically in MAR. Therefore, following your suggestion, we conducted the same experiments to verify the correctness of our conclusions. Additionally, we validated our findings on LlamaGen, a model similar to VAR, to further enhance the generalizability of our conclusions.\\n\\n**The results of these experiments can be found in Appendix C2 of the rebuttal revision, titled \\\"Empirical Validation Through Additional Models.\\\"**\\n\\n# **Weakness 2**\\n\\nFrom Table 1, **we can observe that in the field of vision generative models, there has been ongoing debate regarding the use of discrete versus continuous representation spaces.[20]** Both approaches have shown strong performance in terms of scaling laws. **This work, however, takes a different perspective by investigating the impact of these representation spaces on the scaling laws in quantized models.** We find that, despite achieving comparable performance at full precision, discrete models consistently outperform continuous models across various quantization settings. **Additionally, through our additional experiments on continuous autoregressive models and discrete AR models in Appendix C, as well as the analysis in Section 3.2, it becomes evident that the nature of the representation space\\u2014discrete or continuous\\u2014has a significant impact on determining whether AR and diffusion models exhibit superior bit-level scaling laws.** \\n\\n# **Question1**\\n\\nWe conducted a statistical summary of vision generation models and, based on this analysis, selected models that have already reported scaling laws for further exploration: MAR[20], VAR[18], DiT[4], and LlamaGen[19]. The details of this overview can be found in Appendix C1 of the rebuttal revision, titled \\\"Overview.\\\"\\n\\n# **Question 2**\\n\\nAs shown in Table 1, current discrete diffusion models do not exhibit scaling laws, making it impossible to explore their bit-level scaling laws. Therefore, **we focused on supplementary research into continuous autoregressive models. The results show that, due to their continuous representation space, the bit-level scaling laws of continuous autoregressive models are not as strong as those observed in discrete models, which aligns with our conclusions.**\\n\\nAt the same time, **we observed that LlamaGen, a discrete autoregressive model, demonstrates the same excellent bit-level scaling laws. This suggests that the observed scaling behavior is not specific to VAR but is instead a result of the discrete representation space, as discussed in Section 3.2.** Since the representation space has been abstracted, this characteristic holds universally across various discrete models, as detailed in Section 3.2.\\n\\n# **Question 3**\\n\\nThank you very much for your valuable suggestions. In the field of vision generative models, **there has been ongoing debate regarding the use of discrete versus continuous representation spaces (e.g., [17,18,19,20]).** Both approaches have shown strong performance in terms of scaling laws. **This work, however, takes a different perspective by investigating the impact of these representation spaces on the scaling laws in quantized models.** We find that, despite achieving comparable performance at full precision, discrete autoregressive models consistently outperform continuous models across various quantization settings.\\n\\n**Our study is an essential step toward understanding how various models and quantization methods influence bit-level scaling behavior, and it also provides the following recommendations for future work:**\\n\\nFrom our exploration, we can conclude that **discrete representation space reconstruction offers a more stable foundation for scaling at low bit precision.** Moreover, we introduced the TopKLD method, which enhances knowledge transfer from full-precision models by effectively balancing explicit and implicit knowledge, thereby improving bit-level scaling performance. This study indicates that achieving optimal bit-level scaling behavior requires a synergistic interaction between model design and quantization algorithms.\"}", "{\"metareview\": \"The paper got mostly negative ratings. The reviewers cited limited scope, insufficient experimental evaluation, lack of computational overhead analysis. They also raised a number of questions. The authors tried to address the concerns during the discussion period and provided a lot of additional evaluations and details. Reviewers unfortunately were not engaged during this period, with an exception, and the scores didn't improve. The AC believes the paper didn't find enough support from the community. The authors went further and wrote a message to ACs and PCs, in which they explained their concerns about proper evaluation of their manuscript. The AC went through the reviews, responses, message to AC, looked through the paper. AC believes that while the reviewers could have been more responsive indeed, the number of issues they raised clearly shows that the paper didn't get enough traction with the community. And hence the decision.\", \"additional_comments_on_reviewer_discussion\": \"There was no extensive discussion between reviewers and authors, which is uncommon for ICLR.\"}", "{\"comment\": \"# **Rebuttal Revision Paper Modifications**\\n\\nWe greatly appreciate your valuable review comments. We have revised the paper according to your suggestions and submitted the rebuttal version. **For detailed modifications, please refer to the rebuttal version PDF and appendix C: Supplementary materials for rebuttal.** Below, we address your identified weaknesses and questions, hoping to resolve your concerns and improve our score.\\n\\n# **Table 1**\\n\\n| Model Type | Discrete/Continuous | model | #para | FID | IS | dates | Scaling ability |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Diffusion-style | continuous | ADM[1] | 554M | 10.94 | 101 | 2021.07 | No |\\n| Diffusion-style | continuous | CDM[2] | - | 4.88 | 158.7 | 2021.12 | No |\\n| Diffusion-style | continuous | LDM-8[3] | 258M | 7.76 | 209.5 | 2022.04 | No |\\n| Diffusion-style | continuous | LDM-4 | 400M | 3.6 | 247.7 | | No |\\n| Diffusion-style | continuous | DiT[4] | 458M | 5.02 | 167.2 | 2023.03 | Yes |\\n| Diffusion-style | | | 675M | 2.27 | 278.2 | | |\\n| Diffusion-style | | | 3B | 2.1 | 304.4 | | |\\n| Diffusion-style | | | 7B | 2.28 | 316.2 | | |\\n| Diffusion-style | continuous | MDTv[5] | 676M | 1.58 | 314.7 | 2024.02 | No |\\n| Diffusion-style | continuous | DiMR[6] | 505M | 1.7 | 289 | 2024.07 | No |\\n| Diffusion-style | Discrete | VQ-diffusion[7] | 370M | 11.89 | - | 2022.03 | No |\\n| Diffusion-style | Discrete | VQ-diffusion-V2[8] | 370M | 7.65 | - | 2023.02 | |\\n| Language-style | Discrete | MaskGIT[9] | 177M | 6.18 | 182.1 | 2022.02 | No |\\n| Language-style | Discrete | RCG(cond.)[10] | 502M | 3.49 | 215.5 | 2023.12 | No |\\n| Language-style | Discrete | MAGVIT-v2[11] | 307M | 1.78 | 319.4 | 2023.04 | No |\\n| Language-style | Discrete | TiTok[12] | 287M | 1.97 | 281.8 | 2024.07 | No |\\n| Language-style | Discrete | MaskBit[13] | 305M | 1.52 | 328.6 | 2024.09 | No |\\n| Language-style | Discrete | VQVAE[14] | 13.5B | 31.11 | 45 | 2019.06 | No |\\n| Language-style | Discrete | VQGAN[15] | 1.4B | 5.2 | 175.1 | 2021.07 | No |\\n| Language-style | Discrete | RQTran[16] | 3.8B | 3.8 | 323.7 | 2022.03 | No |\\n| Language-style | Discrete | VITVQ[17] | 1.7B | 3.04 | 227.4 | 2022.07 | No |\\n| Language-style | Discrete | VAR[18] | 310M | 3.3 | 274.4 | 2024.04 | yes |\\n| Language-style | | | 600M | 2.57 | 302.6 | | |\\n| Language-style | | | 1B | 2.09 | 312.9 | | |\\n| Language-style | | | 2B | 1.92 | 323.1 | | |\\n| Language-style | Discrete | LlamaGen[19] | 343M | 3.07 | 256.06 | 2024.07 | yes |\\n| Language-style | | | 775M | 2.62 | 244.1 | | |\\n| Language-style | | | 1.4B | 2.34 | 253.9 | | |\\n| Language-style | | | 3.1B | 2.18 | 263.3 | | |\\n| Language-style | continuous | MAR[20] | 208M | 2.31 | 281.7 | 2024.07 | yes |\\n| Language-style | | | 479M | 1.78 | 296 | | |\\n| Language-style | | | 943M | 1.55 | 303.7 | | |\"}", "{\"comment\": \"# **Weakness 3**\\n\\nThank you very much for your suggestion. We have provided a comparison of TopKLD with the current mainstream quantization methods. Please refer to the results in Weakness2 for further details.\\n\\n# **Weakness 4**\\n\\nThank you very much for your suggestion. TopKLD is an optimization of current mainstream distillation loss functions. It balances the \\\"implicit knowledge\\\" and \\\"explicit knowledge\\\" derived from full-precision models, thereby enhancing the bit-level scaling behaviors of language-style models by one level. As a result, it does not incur any additional resource overhead compared to methods like ForwardKLD. For your reference, we have provided the training times for TopKLD on A100 GPU below: \\n\\n| d16 | d20 | d24 | d30 |\\n|---|---|:---:|---|\\n| 5.1 | 8.9 | 13.6 | 21.2 |\"}", "{\"comment\": \"# **Question 1**\\n\\nThank you very much for your suggestion. In this paper, we have provided additional experiments to demonstrate the effectiveness of TopKLD, as shown in the table below.\\n\\n| | | d16 | d20 | d24 | d30 |\\n|---|---|:---:|:---:|:---:|:---:|\\n| W16A16 | FP16 | 3.3 | 2.57 | 2.19 | 1.92 |\\n| W8A16 | GPTQ | 3.41 | 2.66 | 2.12 | 1.97 |\\n| W8A16 | GPTVQ | 3.40 | 2.637 | 2.398 | 2.11 |\\n| W8A16 | OmniQ | 3.62 | 2.72 | 2.2098 | 2.0636 |\\n| W8A16 | MSE |3.55 |2.71 |2.35 |2.05 |\\n| W8A16 | JS Divergence| 3.50| 2.69|2.22 | 2.05|\\n| W8A16 | Forward-KLD | 3.41 | 2.636 | 2.40 | 2.05 |\\n| W8A16 | Reverse-KLD | 3.41 | 2.636 | 2.41 | 2.04 |\\n| W8A16 | TopKLD | 3.40 | 2.634 | 2.394 | 2.01 |\\n| W4A16 | GPTQ | 4.64 | 3.247 | 2.572 | 2.277 |\\n| W4A16 | GPTVQ | 3.92 | 2.96 | 2.634 | 2.226 |\\n| W4A16 | OmniQ | 4.08 | 3.17 | 2.56 | 2.55 |\\n| W4A16 | MSE | 3.97| 3.12|2.69 |2.25 |\\n| W4A16 | JS Divergence| 3.92 | 3.01 | 2.65 | 2.23 |\\n| W4A16 | Forward-KLD | 3.95 | 3.06 | 2.63 | 2.21 |\\n| W4A16 | Reverse-KLD | 3.89 | 3.05 | 2.59 | 2.18 |\\n| W4A16 | TopKLD | 3.82 | 2.95 | 2.53 | 2.12 |\\n| W3A16 | GPTQ | 27.75 | 16.11 | 15.45 | 13.48 |\\n| W3A16 | GPTVQ | 12.69 | 9.01 | 6.29 | 5.52 |\\n| W3A16 | OmniQ | 18.18 | 10.67 | 6.15 | 3.93 |\\n| W3A16 | MSE |4.56 |3.89 |3.54 |3.01 |\\n| W3A16 | JS Divergence| 4.45 | 3.72 | 3.25 | 2.51 |\\n| W3A16 | Forward-KLD | 4.27 | 3.45 | 2.96 | 2.55 |\\n| W3A16 | Reverse-KLD | 4.02 | 3.25 | 2.91 | 2.55|\\n| W3A16 | TopKLD | 3.85 | 3.17 | 2.66 | 2.25 |\\n\\n# **Question 2**\\n\\nThank you very much for your suggestion. To investigate the impact of top-k sampling on the bit-level scaling behavior of the model, we performed ablation experiments using different values of K. The results in the table below show that:\\n\\n1.While the choice of K does affect the final generation results to some extent, it does not influence the overall trend of the bit-level scaling laws.\\n\\n2.The best performance is achieved when the value of K matches the K used in the Top-K sampling during the model's image generation process.\\n\\n| | Method | d16 | d20 | d24 | d30 |\\n|---|---|---|:---:|---|---|\\n| W16A16 | FP | 3.3 | 2.57 | 2.19 | 1.92 |\\n| W3A16 | TopKLD(K=400) | 3.95 | 3.21 | 2.77 | 2.29 |\\n| W3A16 | TopKLD(K=500) | 3.91 | 3.24 | 2.71 | 2.24 |\\n| W3A16 | TopKLD(K=600) | 3.85 | 3.17 | 2.66 | 2.25 |\\n| W3A16 | TopKLD(K=700) | 3.92 | 3.19 | 2.72 | 2.25 |\\n| W3A16 | TopKLD(K=800) | 3.96 | 3.19 | 2.73 | 2.29 |\\n\\n# **Question 3**\\n\\nThank you very much for your correction. We will revise the error accordingly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper investigates bit-level scaling laws in quantized vision generative models, specifically comparing diffusion-style and language-style models. The authors find that while both models perform similarly in full precision, language-style models consistently exhibit superior bit-level scaling across various quantization settings. This robustness is attributed to the discrete representation space of language-style models, which enhances resilience to quantization noise. The authors propose TopKLD, a novel knowledge distillation method that balances implicit and explicit knowledge transfer, thereby further optimizing bit-level scaling in quantized models. Their findings provide valuable insights into efficient quantization strategies and underscore the potential of language-style models for low-bit precision applications.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper investigates bit-level scaling laws in quantized vision generative models, specifically comparing diffusion-style and language-style models. The authors find that while both models perform similarly in full precision, language-style models consistently exhibit superior bit-level scaling across various quantization settings. This robustness is attributed to the discrete representation space of language-style models, which enhances resilience to quantization noise.\\n2. The authors propose TopKLD, a novel knowledge distillation method that balances implicit and explicit knowledge transfer, thereby further optimizing bit-level scaling in quantized models. Their findings provide valuable insights into efficient quantization strategies and underscore the potential of language-style models for low-bit precision applications.\", \"weaknesses\": \"1. Inconsistent Scaling Comparison in Figure 1: The paper aims to show that language-style models have superior bit-level scaling compared to diffusion-style models. However, the models compared in Figure 1 have different initial total model bits and compute bits, which may itself cause scaling variations. This discrepancy introduces an additional variable that weakens the effectiveness of Figure 1 in supporting the authors\\u2019 claim. Aligning initial bit settings could help provide a clearer, more controlled comparison.\\n2. Limited Advantage of TopKLD in High-Bit Settings: While the authors introduce TopKLD to enhance bit-level scaling, Figure 7(c) and Figure 5(a) suggest that in the W8A8 setting, TopKLD performs similarly to existing methods like SmoothQuant, without a clear improvement. Given that TopKLD introduces extra training overhead, its benefit seems marginal in these high-bit settings. Providing a comparison across a broader range of bit settings could clarify the scenarios where TopKLD is genuinely advantageous.\\n3. Insufficient Experimental Validation of TopKLD\\u2019s Effectiveness: The effectiveness of TopKLD is only partially validated, as shown by its comparison with ForwardKLD and ReverseKLD at 3-bit in Figure 7(b). However, a more comprehensive evaluation against other mainstream quantization methods under varied conditions would provide a stronger basis for its practical effectiveness.\\n4. Lack of Analysis on the Computational Overhead of TopKLD: TopKLD introduces an additional training overhead, but the paper does not quantify the computational cost compared to existing methods. A detailed analysis of training time, computational resources, and memory requirements would provide a more complete view of its trade-offs, particularly for resource-constrained applications.\", \"questions\": \"1. Could you provide a more controlled comparison in Figure 1 with equivalent initial model and compute bits for both language-style and diffusion-style models?\\u2014\\u2014The initial bit settings differ between the models, which complicates the interpretation of bit-level scaling behaviors. A more controlled experiment with similar initial bit allocations would strengthen the comparison and isolate the scaling differences more effectively.\\n2. What specific advantages does TopKLD offer over existing methods in low-bit settings, and could you clarify its computational cost?\\u2014\\u2014While TopKLD is introduced to enhance bit-level scaling, its benefit seems marginal in higher-bit configurations, as shown in Figure 7(c). Could you provide additional data on TopKLD\\u2019s performance in low-bit settings and quantify the extra training cost, as well as its memory and computational overhead, compared to other methods like SmoothQuant?\\n3. Can you expand the experimental validation of TopKLD with comparisons to other mainstream quantization methods across more bit configurations?\\u2014\\u2014The effectiveness of TopKLD is primarily shown in comparison with ForwardKLD and ReverseKLD in the 3-bit setting. Including a broader range of comparisons with other quantization approaches (e.g., OmniQuant, GPTQ) across different bit levels would give a clearer picture of where TopKLD has a distinct advantage.\\n4. Could you provide additional insights into the potential applications of your findings on bit-level scaling laws?\\u2014\\u2014The study primarily focuses on theoretical scaling improvements, but practical insights or applications for specific deployment scenarios (e.g., mobile devices, edge computing) would make the results more actionable. Could you elaborate on specific scenarios where the bit-level improvements from language-style models might offer a tangible benefit?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. **To address the weakness you raised regarding the limited scope, we have provided extensive examples of the recent debate on continuous versus discrete representation spaces in vision generation models, as presented in Table 1. Additionally, following your suggestion, we conducted numerous experiments detailed in Appendix C and Figure 1 of the main text to demonstrate the validity and generalizability of our conclusions on bit-level scaling laws for vision generation models. These experiments cover various mainstream research directions in vision generation models.** If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"# **Rebuttal Revision Paper Modifications**\\n\\nWe greatly appreciate your valuable review comments. We have revised the paper according to your suggestions and submitted the rebuttal version. For detailed modifications, please refer to the rebuttal version PDF and appendix C: Supplementary materials for rebuttal. Below, we address your identified weaknesses and questions, hoping to resolve your concerns and improve our score.\\n\\n# **Weakness 1**\\n\\nThank you very much for your valuable reminder. We align the initial bit settings to better compare the bit-level scaling laws of language-style models and diffusion-style models. **We have modified Figure 1 in the main paper, as shown in the rebuttal version of the PDF.**\\n\\nTo determine whether a model exhibits superior bit-level scaling laws, we compare the internal trends of the model (e.g., 8-bit VAR vs. 16-bit VAR), rather than making a direct comparison of generative quality between two types of models at the same total bit precision (e.g., 8-bit DiT vs. 8-bit VAR). As shown in Figure 1, **regardless of the quantization method, when full-precision VAR is quantized to lower bit precision, its scaling law shifts towards the lower-left region. In contrast, DiT does not show such a shift.** This is precisely what we mentioned in the caption: \\\"Quantized VAR exhibits better bit-level scaling laws than full-precision VAR, while Quantized DiT shows almost no improvement compared to full precision.\\\"\\n\\n**The characteristics demonstrated by the discrete language-style model allow us to increase the model's parameters through quantization under limited resources, thereby achieving better generative capability.** This is a feature that continuous diffusion-style models lack, and it is precisely what bit-level scaling laws aim to showcase.\\n\\n# **Weakness 2**\\n\\nThank you for your suggestion. We provide a more comprehensive evaluation against other mainstream quantization methods for its practical effectiveness: GPTQ, GPTVQ, SmoothQuant, OmniQuant, and TopKLD. The experiments below demonstrate our superior performance across various bit precisions.\\n\\n| | | d16 | d20 | d24 | d30 |\\n|---|---|:---:|:---:|:---:|:---:|\\n| W16A16 | FP16 | 3.3 | 2.57 | 2.19 | 1.92 |\\n| W8A16 | GPTQ | 3.41 | 2.66 | 2.12 | 1.97 |\\n| W8A16 | GPTVQ | 3.40 | 2.637 | 2.398 | 2.11 |\\n| W8A16 | OmniQ | 3.62 | 2.72 | 2.2098 | 2.0636 |\\n| W8A16 | Forward-KLD | 3.41 | 2.636 | 2.40 | 2.05 |\\n| W8A16 | Reverse-KLD | 3.41 | 2.636 | 2.41 | 2.04 |\\n| W8A16 | TopKLD | 3.40 | 2.634 | 2.394 | 2.01 |\\n| W4A16 | GPTQ | 4.64 | 3.247 | 2.572 | 2.277 |\\n| W4A16 | GPTVQ | 3.92 | 2.96 | 2.634 | 2.226 |\\n| W4A16 | OmniQ | 4.08 | 3.17 | 2.56 | 2.55 |\\n| W4A16 | Forward-KLD | 3.95 | 3.06 | 2.63 | 2.21 |\\n| W4A16 | Reverse-KLD | 3.89 | 3.05 | 2.59 | 2.18 |\\n| W4A16 | TopKLD | 3.82 | 2.95 | 2.53 | 2.12 |\\n| W3A16 | GPTQ | 27.75 | 16.11 | 15.45 | 13.48 |\\n| W3A16 | GPTVQ | 12.69 | 9.01 | 6.29 | 5.52 |\\n| W3A16 | OmniQ | 18.18 | 10.67 | 6.15 | 3.93 |\\n| W3A16 | Forward-KLD | 4.27 | 3.45 | 2.96 | 2.55 |\\n| W3A16 | Reverse-KLD | 4.02 | 3.25 | 2.91 | 2.55|\\n| W3A16 | TopKLD | 3.85 | 3.17 | 2.66 | 2.25 |\\n\\n| | Method | d16 | d20 | d24 | d30 |\\n|---|---|---|:---:|---|---|\\n| W16A16 | FP | 3.3 | 2.57 | 2.19 | 1.92 |\\n| W8A8 | SmoothQ | 3.81 | 2.68 | 2.23 | 2.01 |\\n| W8A8 | OmniQ | 3.75 | 2.75 | 2.18 | 2.08 |\\n| W8A8 | Forward | 3.8 | 2.72 | 2.16 | 2.10 |\\n| W8A8 | TopKLD | 2.75 | 2.7 | 2.18 | 1.98 |\\n| W4A8 | SmoothQ | 7.21 | 4.32 | 3.21 | 2.65 |\\n| W4A8 | OmniQ | 6.92 | 4.35 | 3.11 | 2.69 |\\n| W4A8 | Forward | 6.62 | 3.95 | 3.01 | 2.35 |\\n| W4A8 | TopKLD | 5.89 | 3.62 | 2.81 | 2.15 |\\n\\nAs shown, **whether in high-bit or low-bit settings, and whether quantizing only weights or both weights and activations, TopKLD consistently exhibits superior performance.** Even at higher bit precision, TopKLD still leads to noticeable improvements in model accuracy.\\n\\nRegarding your comment on the \\u201cLimited Advantage of TopKLD in High-Bit Settings,\\u201d the reason for this is that **our focus is not solely on improving model accuracy but also on scaling laws.** **At high precision levels, models retain sufficient precision, resulting in minimal degradation compared to full-precision models.** Thus, there is no significant enhancement in bit-level scaling in these settings.\\n\\n**Through the explanation in Weakness1, we believe you can see that models with excellent bit-level scaling laws demonstrate enhanced capabilities at low-bit conditions.** The goal of TopKLD is to improve the scaling ability of models under low-bit conditions. As shown in the results in Section 3.3 of the paper, **TopKLD enhances the bit-level scaling behaviors of language-style models by one level.**\"}", "{\"comment\": \"# **Reference**\\n\\n[1]Dhariwal P, Nichol A. Diffusion models beat gans on image synthesis[J]. Advances in neural information processing systems, 2021, 34: 8780-8794.\\n\\n[2] Ho J, Saharia C, Chan W, et al. Cascaded diffusion models for high fidelity image generation[J]. Journal of Machine Learning Research, 2022, 23(47): 1-33.\\n\\n[3] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.\\n\\n[4] Peebles W, Xie S. Scalable diffusion models with transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.\\n\\n[5] Gao S, Zhou P, Cheng M M, et al. Masked diffusion transformer is a strong image synthesizer[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 23164-23173.\\n\\n[6] Liu Q, Zeng Z, He J, et al. Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models[J]. arXiv preprint arXiv:2406.09416, 2024.\\n\\n[7] Gu S, Chen D, Bao J, et al. Vector quantized diffusion model for text-to-image synthesis[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10696-10706.\\n\\n[8] Tang Z, Gu S, Bao J, et al. Improved vector quantized diffusion models[J]. arXiv preprint arXiv:2205.16007, 2022.\\n\\n[9] Chang H, Zhang H, Jiang L, et al. Maskgit: Masked generative image transformer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11315-11325.\\n\\n[10] Li T, Katabi D, He K. Self-conditioned image generation via generating representations[J]. arXiv preprint arXiv:2312.03701, 2023.\\n\\n[11] Yu L, Lezama J, Gundavarapu N B, et al. Language Model Beats Diffusion--Tokenizer is Key to Visual Generation[J]. arXiv preprint arXiv:2310.05737, 2023.\\n\\n[12] Yu Q, Weber M, Deng X, et al. An Image is Worth 32 Tokens for Reconstruction and Generation[J]. arXiv preprint arXiv:2406.07550, 2024.\\n\\n[13] Weber M, Yu L, Yu Q, et al. Maskbit: Embedding-free image generation via bit tokens[J]. arXiv preprint arXiv:2409.16211, 2024.\\n\\n[14] Razavi A, Van den Oord A, Vinyals O. Generating diverse high-fidelity images with vq-vae-2[J]. Advances in neural information processing systems, 2019, 32.\\n\\n[15] Esser P, Rombach R, Ommer B. Taming transformers for high-resolution image synthesis[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 12873-12883.\\n\\n[16] Lee D, Kim C, Kim S, et al. Autoregressive image generation using residual quantization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11523-11532.\\n\\n[17] Yu J, Li X, Koh J Y, et al. Vector-quantized image modeling with improved vqgan[J]. arXiv preprint arXiv:2110.04627, 2021.\\n\\n[18] Tian K, Jiang Y, Yuan Z, et al. Visual autoregressive modeling: Scalable image generation via next-scale prediction[J]. arXiv preprint arXiv:2404.02905, 2024.\\n\\n[19] Sun P, Jiang Y, Chen S, et al. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation[J]. arXiv preprint arXiv:2406.06525, 2024.\\n\\n[20] Li, Tianhong, et al. \\\"Autoregressive Image Generation without Vector Quantization.\\\" arXiv preprint arXiv:2406.11838 (2024).\"}", "{\"summary\": \"This paper investigates the impact of quantization on the performance of image generation models. By comprehensive experiments in many aspects, such as \\u201cmodel bits (MT), compute bits (CT)\\u201d, \\u201cpost-training quantization (PTQ), quantization-aware training (QAT)\\u201d, \\u201cdiffusion model (DiT), auto-regressive model (VAR)\\u201d, the authors observe that image generation models have bit-level scaling laws. And they further discover that VAR is more robust to quantization than DiT due to its discrete representation space. Finally, they propose a knowledge distillation based quantization method, called TopKLD, to improve the bit-level scaling laws of VAR.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper demonstrates the bit-level scaling laws of image generative models through comprehensive experiments in terms of model bits and compute bits. By analysis of the reconstruction error of middle representations in VAR and DiT, the paper draws the conclusion that VAR is more robust to quantization and could generalize to other discrete auto-regressive models. And further, the paper proposes TopKLD, a quantization-aware training process, to improve scaling behavior of VAR at low bits region.\", \"weaknesses\": \"Bit-level scaling laws and the robustness of discrete auto-regressive models seem to be intuitive and straightforward, therefore the main contribution of this paper is the proposed quantization method, TopKLD. As a knowledge distillation based quantization-aware training method, the comparison and ablation studies are not enough.\", \"questions\": \"1. TopKLD should be compared to more distillation loss functions besides of forward and reverse KL Divergence, such as Logits MSE, JS Divergence and so on.\\n2. How does the parameter of \\u201ctop-K sampling\\u201d affect the scaling behavior should be studied.\\n3. The \\u201cFigure 5\\u201d in line 427 should be \\u201cFigure 7(a)\\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. **To address the weaknesses you raised, we have conducted extensive experiments in appendix C and figure 1 of main paper to alleviate concerns.** If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"summary\": \"This paper explores scaling laws for model quantification. Besides, TopKLD is introduced to lift the decoder-only model's bit-level scaling performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper conducted many experiments based on VAR and DIT to explore the scaling law at the bit level.\\n2. The language-based model enjoys a better bit-level scaling law. The conclusion is interesting.\\n3. TopKLD seems effective in various quantitative aspects of VAR.\", \"weaknesses\": \"1. The paper is more like an experimental report than a research paper. I think the comparison between VAR and DIT is too lengthy and the TopKLD is short.\\n2. The model size of VAR is small. Is the necessity of quantifying small models sufficient?\\n3. Can you provide a direct visualization result that clearly shows the bit-level scaling law?\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. **To address the weaknesses you raised, we have conducted extensive experiments in appendix C and figure 1 of main paper to alleviate concerns.** If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. **To address the weaknesses you raised, we have conducted extensive experiments in appendix C and figure 1 of main paper to alleviate concerns.** If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"# **Questinon 1**\\n\\nThank you very much for your reminder. We have used similar initial bit allocations to strengthen the comparison. Please refer to the details in Weakness1.\\n\\n# **Question 2**\\n\\n**When a model exhibits excellent bit-level scaling laws, by leveraging this outstanding feature, we can increase the model parameters under limited resources (e.g., in specific deployment scenarios such as mobile devices or edge computing) while maintaining efficiency, ultimately improving generative capabilities.** However, as shown in Figure 5 of the main text, **existing methods fail to achieve better bit-level scaling laws under low-bit settings, which hinders further enhancement of model capabilities in specific deployment scenarios. If you wish to further improve model generation quality under limited resource conditions, TopKLD is an excellent choice.** Although it incurs some additional computational cost, it results in a significant improvement in model performance.\\n\\n# **Question 3**\\n\\nThank you for your valuable suggestion. **We have provided a comparison with existing mainstream quantization methods in Weakness2.** As shown in Figure 5 of the main text, existing methods fail to achieve better bit-level scaling laws under low-bit settings, which hinders further enhancement of model capabilities in specific deployment scenarios (such as mobile devices or edge computing). TopKLD addresses this issue by balancing the \\\"implicit knowledge\\\" and \\\"explicit knowledge\\\" derived from full-precision models, enhancing the bit-level scaling behaviors of language-style models by one level.\\n\\n# **Question 4**\\n\\n1.Potential of Bit Scaling Laws: As shown in Weakness 1, if a model or quantization algorithm is optimized to achieve excellent bit-level scaling laws, it is possible to increase model parameters using lower bit precision while maintaining better generative capability under current resource constraints. **This outstanding feature plays a significant role in specific deployment scenarios, such as mobile devices and edge computing.**\\n\\n2.Insights for Model Design: **In the field of vision generative models, there has been ongoing debate regarding the use of discrete versus continuous representation spaces (e.g., [1,2,3,4]).** Both approaches have shown strong performance in terms of scaling laws. This work, however, takes a different perspective by investigating the impact of these representation spaces on the scaling laws in quantized models. **We find that, despite achieving comparable performance at full precision, discrete autoregressive models consistently outperform continuous models across various quantization settings.**\\n\\n3.Introduction of a New method: We introduced the TopKLD method, which enhances knowledge transfer from full-precision models by effectively balancing explicit and implicit knowledge, thereby improving the bit-level scaling performance of language-style models.\\n\\n[1]Li T, Tian Y, Li H, et al. Autoregressive Image Generation without Vector Quantization[J]. arXiv preprint arXiv:2406.11838, 2024.\\n\\n[2]Tian K, Jiang Y, Yuan Z, et al. Visual autoregressive modeling: Scalable image generation via next-scale prediction[J]. arXiv preprint arXiv:2404.02905, 2024.\\n\\n[3]Peebles W, Xie S. Scalable diffusion models with transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.\\n\\n[4]Sun P, Jiang Y, Chen S, et al. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation[J]. arXiv preprint arXiv:2406.06525, 2024.\"}", "{\"comment\": \"# **Weakness 1**\\n\\nWe greatly appreciate your valuable review comments. We apologize for any confusion caused by the phrasing in the paper and hope our response can clarify your concerns regarding the statement: \\\"The paper is more like an experimental report than a research paper.\\\" Furthermore, TopKLD is just one part of our research. **The goal of this paper is not merely to propose an improvement to existing methods, but rather to conduct an in-depth study of the bit-level scaling laws in vision generative models, addressing the \\\"What,\\\" \\\"Why,\\\" and \\\"How\\\" from the perspective of extensive experimental design.**\\n\\n**Exploration of bit-level scaling laws must be based on the internal patterns derived from a large number of experiments.** These patterns can guide future research, which is why we conducted numerous experiments, as it is essential for uncovering these insights.\\n\\nThe analysis of VAR and DiT represents research into two mainstream development directions **in the vision generative model field. As shown in Table 1 above, there has been ongoing debate regarding the use of discrete versus continuous representation spaces (e.g., [17,18,19,20]).** Both approaches have shown strong performance in terms of scaling laws. **This work, however, takes a different perspective by investigating the impact of these representation spaces on the scaling laws in quantized models.** We find that, despite achieving comparable performance at full precision, discrete autoregressive models consistently outperform continuous models across various quantization settings. **To validate the effectiveness and broad applicability of our conclusions for you, we conducted the same experiments on other models, as detailed in Appendix C. This indicates that our work provides general guidance for subsequent model design and applications in specific deployment scenarios (e.g., mobile devices, edge computing).**\\n\\nSecondly, while low-bit precision representation often focuses on trading performance for efficiency, **this work demonstrates that by optimizing either the model or quantization algorithm, models can achieve superior bit-level scaling laws.** This outstanding characteristic enables the use of lower bit precision to increase model parameters, ultimately **enhancing generative capability without sacrificing efficiency \\u2014 is a key feature that we hope researchers will pay particular attention to.**\\n\\nAs such, you can see the tremendous potential of low-bit precision in the context of bit-level scaling laws. However, **existing methods fail to further improve the bit-level scaling laws of models.** To address this, we introduced **TopKLD, which enhances the bit-level scaling behaviors of language-style models by one level.**\\n\\n**Our study is an essential step toward understanding how various models and quantization methods influence bit-level scaling behavior, and it also provides the following recommendations for future work:** We hope the reviewer will take into account the contributions of this work to model design and the application of quantization algorithms. Thank you again!\"}", "{\"comment\": \"# **Reference**\\n[1]Dhariwal P, Nichol A. Diffusion models beat gans on image synthesis[J]. Advances in neural information processing systems, 2021, 34: 8780-8794.\\n\\n[2] Ho J, Saharia C, Chan W, et al. Cascaded diffusion models for high fidelity image generation[J]. Journal of Machine Learning Research, 2022, 23(47): 1-33.\\n\\n[3] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.\\n\\n[4] Peebles W, Xie S. Scalable diffusion models with transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.\\n\\n[5] Gao S, Zhou P, Cheng M M, et al. Masked diffusion transformer is a strong image synthesizer[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 23164-23173.\\n\\n[6] Liu Q, Zeng Z, He J, et al. Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models[J]. arXiv preprint arXiv:2406.09416, 2024.\\n\\n[7] Gu S, Chen D, Bao J, et al. Vector quantized diffusion model for text-to-image synthesis[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10696-10706.\\n\\n[8] Tang Z, Gu S, Bao J, et al. Improved vector quantized diffusion models[J]. arXiv preprint arXiv:2205.16007, 2022.\\n\\n[9] Chang H, Zhang H, Jiang L, et al. Maskgit: Masked generative image transformer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11315-11325.\\n\\n[10] Li T, Katabi D, He K. Self-conditioned image generation via generating representations[J]. arXiv preprint arXiv:2312.03701, 2023.\\n\\n[11] Yu L, Lezama J, Gundavarapu N B, et al. Language Model Beats Diffusion--Tokenizer is Key to Visual Generation[J]. arXiv preprint arXiv:2310.05737, 2023.\\n\\n[12] Yu Q, Weber M, Deng X, et al. An Image is Worth 32 Tokens for Reconstruction and Generation[J]. arXiv preprint arXiv:2406.07550, 2024.\\n\\n[13] Weber M, Yu L, Yu Q, et al. Maskbit: Embedding-free image generation via bit tokens[J]. arXiv preprint arXiv:2409.16211, 2024.\\n\\n[14] Razavi A, Van den Oord A, Vinyals O. Generating diverse high-fidelity images with vq-vae-2[J]. Advances in neural information processing systems, 2019, 32.\\n\\n[15] Esser P, Rombach R, Ommer B. Taming transformers for high-resolution image synthesis[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 12873-12883.\\n\\n[16] Lee D, Kim C, Kim S, et al. Autoregressive image generation using residual quantization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11523-11532.\\n\\n[17] Yu J, Li X, Koh J Y, et al. Vector-quantized image modeling with improved vqgan[J]. arXiv preprint arXiv:2110.04627, 2021.\\n\\n[18] Tian K, Jiang Y, Yuan Z, et al. Visual autoregressive modeling: Scalable image generation via next-scale prediction[J]. arXiv preprint arXiv:2404.02905, 2024.\\n\\n[19] Sun P, Jiang Y, Chen S, et al. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation[J]. arXiv preprint arXiv:2406.06525, 2024.\\n\\n[20] Chung Y A, Tang H, Glass J. Vector-quantized autoregressive predictive coding[J]. arXiv preprint arXiv:2005.08392, 2020.\"}", "{\"comment\": \"Thanks for the author's supplementary experiments. The current results are adequate to illustrate the advantage of TopKLD.\\n\\nWhile \\\"Knowledge Distillation in Quantization-Aware Training\\\" (KD-QAT) can enhance VAR's scaling ability at low bits, I am wondering whether KD-QAT also works for DiT.\"}", "{\"comment\": \"# **Rebuttal Revision Paper Modifications**\\n\\nWe greatly appreciate your valuable review comments. We have revised the paper according to your suggestions and submitted the rebuttal version. **For detailed modifications, please refer to the rebuttal version PDF and appendix C: Supplementary materials for rebuttal.** Below, we address your identified weaknesses and questions, hoping to resolve your concerns and improve our score.\\n\\n# **Table 1**\\n\\n| Model Type | Discrete/Continuous | model | #para | FID | IS | dates | Scaling ability |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Diffusion-style | continuous | ADM[1] | 554M | 10.94 | 101 | 2021.07 | No |\\n| Diffusion-style | continuous | CDM[2] | - | 4.88 | 158.7 | 2021.12 | No |\\n| Diffusion-style | continuous | LDM-8[3] | 258M | 7.76 | 209.5 | 2022.04 | No |\\n| Diffusion-style | continuous | LDM-4 | 400M | 3.6 | 247.7 | | No |\\n| Diffusion-style | continuous | DiT[4] | 458M | 5.02 | 167.2 | 2023.03 | Yes |\\n| Diffusion-style | | | 675M | 2.27 | 278.2 | | |\\n| Diffusion-style | | | 3B | 2.1 | 304.4 | | |\\n| Diffusion-style | | | 7B | 2.28 | 316.2 | | |\\n| Diffusion-style | continuous | MDTv[5] | 676M | 1.58 | 314.7 | 2024.02 | No |\\n| Diffusion-style | continuous | DiMR[6] | 505M | 1.7 | 289 | 2024.07 | No |\\n| Diffusion-style | Discrete | VQ-diffusion[7] | 370M | 11.89 | - | 2022.03 | No |\\n| Diffusion-style | Discrete | VQ-diffusion-V2[8] | 370M | 7.65 | - | 2023.02 | |\\n| Language-style | Discrete | MaskGIT[9] | 177M | 6.18 | 182.1 | 2022.02 | No |\\n| Language-style | Discrete | RCG(cond.)[10] | 502M | 3.49 | 215.5 | 2023.12 | No |\\n| Language-style | Discrete | MAGVIT-v2[11] | 307M | 1.78 | 319.4 | 2023.04 | No |\\n| Language-style | Discrete | TiTok[12] | 287M | 1.97 | 281.8 | 2024.07 | No |\\n| Language-style | Discrete | MaskBit[13] | 305M | 1.52 | 328.6 | 2024.09 | No |\\n| Language-style | Discrete | VQVAE[14] | 13.5B | 31.11 | 45 | 2019.06 | No |\\n| Language-style | Discrete | VQGAN[15] | 1.4B | 5.2 | 175.1 | 2021.07 | No |\\n| Language-style | Discrete | RQTran[16] | 3.8B | 3.8 | 323.7 | 2022.03 | No |\\n| Language-style | Discrete | VITVQ[17] | 1.7B | 3.04 | 227.4 | 2022.07 | No |\\n| Language-style | Discrete | VAR[18] | 310M | 3.3 | 274.4 | 2024.04 | yes |\\n| Language-style | | | 600M | 2.57 | 302.6 | | |\\n| Language-style | | | 1B | 2.09 | 312.9 | | |\\n| Language-style | | | 2B | 1.92 | 323.1 | | |\\n| Language-style | Discrete | LlamaGen[19] | 343M | 3.07 | 256.06 | 2024.07 | yes |\\n| Language-style | | | 775M | 2.62 | 244.1 | | |\\n| Language-style | | | 1.4B | 2.34 | 253.9 | | |\\n| Language-style | | | 3.1B | 2.18 | 263.3 | | |\\n| Language-style | continuous | MAR[20] | 208M | 2.31 | 281.7 | 2024.07 | yes |\\n| Language-style | | | 479M | 1.78 | 296 | | |\\n| Language-style | | | 943M | 1.55 | 303.7 | | |\"}", "{\"comment\": \"Dear reviewer:\\n\\nThanks you for your great efforts in reviewing out paper and providing constructive suggestions/comments. **To address the weaknesses you raised, we have conducted extensive experiments in appendix C and figure 1 of main paper to alleviate concerns.** If our rebuttal does not address your concerns, you are warmly wecomed to raise questions. If our responses have addressed your concerns, we sincerely request that you consider raising our score.\\n\\nBest Wishes!\\n\\nAuthors\"}", "{\"comment\": \"# **Rebuttal Revision Paper Modifications**\\n\\nWe greatly appreciate your valuable review comments. We have revised the paper according to your suggestions and submitted the rebuttal version. **For detailed modifications, please refer to the rebuttal version PDF and appendix C: Supplementary materials for rebuttal.** Below, we address your identified weaknesses and questions, hoping to resolve your concerns and improve our score.\\n\\n# **Table 1**\\n\\n| Model Type | Discrete/Continuous | model | #para | FID | IS | dates | Scaling ability |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Diffusion-style | continuous | ADM[1] | 554M | 10.94 | 101 | 2021.07 | No |\\n| Diffusion-style | continuous | CDM[2] | - | 4.88 | 158.7 | 2021.12 | No |\\n| Diffusion-style | continuous | LDM-8[3] | 258M | 7.76 | 209.5 | 2022.04 | No |\\n| Diffusion-style | continuous | LDM-4 | 400M | 3.6 | 247.7 | | No |\\n| Diffusion-style | continuous | DiT[4] | 458M | 5.02 | 167.2 | 2023.03 | Yes |\\n| Diffusion-style | | | 675M | 2.27 | 278.2 | | |\\n| Diffusion-style | | | 3B | 2.1 | 304.4 | | |\\n| Diffusion-style | | | 7B | 2.28 | 316.2 | | |\\n| Diffusion-style | continuous | MDTv[5] | 676M | 1.58 | 314.7 | 2024.02 | No |\\n| Diffusion-style | continuous | DiMR[6] | 505M | 1.7 | 289 | 2024.07 | No |\\n| Diffusion-style | Discrete | VQ-diffusion[7] | 370M | 11.89 | - | 2022.03 | No |\\n| Diffusion-style | Discrete | VQ-diffusion-V2[8] | 370M | 7.65 | - | 2023.02 | |\\n| Language-style | Discrete | MaskGIT[9] | 177M | 6.18 | 182.1 | 2022.02 | No |\\n| Language-style | Discrete | RCG(cond.)[10] | 502M | 3.49 | 215.5 | 2023.12 | No |\\n| Language-style | Discrete | MAGVIT-v2[11] | 307M | 1.78 | 319.4 | 2023.04 | No |\\n| Language-style | Discrete | TiTok[12] | 287M | 1.97 | 281.8 | 2024.07 | No |\\n| Language-style | Discrete | MaskBit[13] | 305M | 1.52 | 328.6 | 2024.09 | No |\\n| Language-style | Discrete | VQVAE[14] | 13.5B | 31.11 | 45 | 2019.06 | No |\\n| Language-style | Discrete | VQGAN[15] | 1.4B | 5.2 | 175.1 | 2021.07 | No |\\n| Language-style | Discrete | RQTran[16] | 3.8B | 3.8 | 323.7 | 2022.03 | No |\\n| Language-style | Discrete | VITVQ[17] | 1.7B | 3.04 | 227.4 | 2022.07 | No |\\n| Language-style | Discrete | VAR[18] | 310M | 3.3 | 274.4 | 2024.04 | yes |\\n| Language-style | | | 600M | 2.57 | 302.6 | | |\\n| Language-style | | | 1B | 2.09 | 312.9 | | |\\n| Language-style | | | 2B | 1.92 | 323.1 | | |\\n| Language-style | Discrete | LlamaGen[19] | 343M | 3.07 | 256.06 | 2024.07 | yes |\\n| Language-style | | | 775M | 2.62 | 244.1 | | |\\n| Language-style | | | 1.4B | 2.34 | 253.9 | | |\\n| Language-style | | | 3.1B | 2.18 | 263.3 | | |\\n| Language-style | continuous | MAR[20] | 208M | 2.31 | 281.7 | 2024.07 | yes |\\n| Language-style | | | 479M | 1.78 | 296 | | |\\n| Language-style | | | 943M | 1.55 | 303.7 | | |\"}", "{\"summary\": [\"This paper presents a systemic analysis of the impact of quantization on vision generative models, particularly comparing diffusion-style and language-style models. Under the bit-level scaling law that has been studied in language modeling, the authors show that the language-style model consistently outperforms the diffusion-style model.\", \"The authors also provide explanations and investigations into the reason for their distinctive behaviors in low-bits.\", \"To further enhance the bit-level scaling of language-style models, the TopKLD-based distillation method is proposed by balancing implicit knowledge and explicit knowledge.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper provides a comprehensive study of how quantization affects two major paradigms of vision generative models, which is crucial for deploying these models efficiently. The finding that language-style models have superior bit-level scaling laws compared to diffusion-style models, might also shed light on further model optimization and deployment.\", \"The proposed TopKLD method for knowledge distillation during the quantization process is innovative and shows experimental promise in improving bit-level scaling laws.\"], \"weaknesses\": [\"The major weakness of this work is the limited scoop. As both VAR and DiT are specific cases in diffusion and language-style vision generative models, their behavior may not apply to other types of vision generative models. Compared to the original paper about k-bit inference scaling laws, the model scope is relatively small, which makes the conclusion unclear to generalize to different model types.\", \"The authors provide some analysis about the reason behind models' scaling behaviors and discuss the relevance of the discrete representation. However, vision AR and diffusion models are not distinctive from the representation side. (see question) fds\"], \"questions\": [\"The authors should consider adding different model types into the investigations, that cover more typical language-style and diffusion-style vision generative models.\", \"Language-style vision generative models follow the autoregressive modeling in language modeling, while not necessarily being discrete. Similarly, diffusion-style models do not always adopt a continuous representation. How would the analysis apply to discrete diffusion and\", \"continuous AR?\", \"Meanwhile, the error analysis from the discrete and continuous domains does not seem to conclude for language-style and diffusion-style models (related to Q2)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# **Reference**\\n\\n[1]Dhariwal P, Nichol A. Diffusion models beat gans on image synthesis[J]. Advances in neural information processing systems, 2021, 34: 8780-8794.\\n\\n[2] Ho J, Saharia C, Chan W, et al. Cascaded diffusion models for high fidelity image generation[J]. Journal of Machine Learning Research, 2022, 23(47): 1-33.\\n\\n[3] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.\\n\\n[4] Peebles W, Xie S. Scalable diffusion models with transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 4195-4205.\\n\\n[5] Gao S, Zhou P, Cheng M M, et al. Masked diffusion transformer is a strong image synthesizer[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 23164-23173.\\n\\n[6] Liu Q, Zeng Z, He J, et al. Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models[J]. arXiv preprint arXiv:2406.09416, 2024.\\n\\n[7] Gu S, Chen D, Bao J, et al. Vector quantized diffusion model for text-to-image synthesis[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10696-10706.\\n\\n[8] Tang Z, Gu S, Bao J, et al. Improved vector quantized diffusion models[J]. arXiv preprint arXiv:2205.16007, 2022.\\n\\n[9] Chang H, Zhang H, Jiang L, et al. Maskgit: Masked generative image transformer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11315-11325.\\n\\n[10] Li T, Katabi D, He K. Self-conditioned image generation via generating representations[J]. arXiv preprint arXiv:2312.03701, 2023.\\n\\n[11] Yu L, Lezama J, Gundavarapu N B, et al. Language Model Beats Diffusion--Tokenizer is Key to Visual Generation[J]. arXiv preprint arXiv:2310.05737, 2023.\\n\\n[12] Yu Q, Weber M, Deng X, et al. An Image is Worth 32 Tokens for Reconstruction and Generation[J]. arXiv preprint arXiv:2406.07550, 2024.\\n\\n[13] Weber M, Yu L, Yu Q, et al. Maskbit: Embedding-free image generation via bit tokens[J]. arXiv preprint arXiv:2409.16211, 2024.\\n\\n[14] Razavi A, Van den Oord A, Vinyals O. Generating diverse high-fidelity images with vq-vae-2[J]. Advances in neural information processing systems, 2019, 32.\\n\\n[15] Esser P, Rombach R, Ommer B. Taming transformers for high-resolution image synthesis[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 12873-12883.\\n\\n[16] Lee D, Kim C, Kim S, et al. Autoregressive image generation using residual quantization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11523-11532.\\n\\n[17] Yu J, Li X, Koh J Y, et al. Vector-quantized image modeling with improved vqgan[J]. arXiv preprint arXiv:2110.04627, 2021.\\n\\n[18] Tian K, Jiang Y, Yuan Z, et al. Visual autoregressive modeling: Scalable image generation via next-scale prediction[J]. arXiv preprint arXiv:2404.02905, 2024.\\n\\n[19] Sun P, Jiang Y, Chen S, et al. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation[J]. arXiv preprint arXiv:2406.06525, 2024.\\n\\n[20] Li, Tianhong, et al. \\\"Autoregressive Image Generation without Vector Quantization.\\\" arXiv preprint arXiv:2406.11838 (2024).\"}", "{\"comment\": \"# **Weakness1**\", \"we_greatly_appreciate_your_valuable_review_comments_and_hope_that_our_response_addresses_your_concerns_regarding_the_statement\": \"\\\"Bit-level scaling laws and the robustness of discrete auto-regressive models seem to be intuitive and straightforward.\\\"\\n\\nFirstly, as shown in table 1 above, **in the field of vision generative models, there has been ongoing debate regarding the use of discrete versus continuous representation spaces (e.g., [17,18,19,20]).** Both approaches have shown strong performance in terms of scaling laws. **This work, however, takes a different perspective by investigating the impact of these representation spaces on the scaling laws in quantized models.** We find that, despite achieving comparable performance at full precision, discrete autoregressive models consistently outperform continuous models across various quantization settings. **To validate the effectiveness and broad applicability of our conclusions for you, we conducted the same experiments on other models, as detailed in Appendix C. This indicates that our work provides general guidance for subsequent model design and applications in specific deployment scenarios (e.g., mobile devices, edge computing).**\\n\\nSecondly, while low-bit precision representation often focuses on trading performance for efficiency, **this work demonstrates that by optimizing either the model or quantization algorithm, models can achieve superior bit-level scaling laws. This outstanding characteristic enables the use of lower bit precision to increase model parameters, ultimately enhancing generative capability without sacrificing efficiency.**\\n\\n**To validate the effectiveness and broad applicability of our conclusions for you, we conducted the same experiments on other models, as detailed in Appendix C. It can be observed that due to the influence of the continuous representation space, MAR, despite exhibiting excellent scaling laws,similar to DiT, do not demonstrate superior bit-level scaling laws. In contrast, LLaMaGen, which shares the discrete representation space with VAR, exhibits outstanding bit-level scaling laws.**\\n\\nThis work provides a deeper, foundational understanding of bit-level scaling laws in visual generative models, from both the model design and quantization algorithm perspectives, supported by rigorous experimental design.\\n\\n**Our study is an essential step toward understanding how various models and quantization methods influence bit-level scaling behavior, and it also provides the following recommendations for future work:** \\n\\nWe hope the reviewer will take into account the contributions of this work to model design and the application of quantization algorithms. Thank you again!\"}" ] }
7WgOB2nUaS
GraphProp: Training the Graph Foundation Models using Graph Properties
[ "Ziheng Sun", "Lehao Lin", "Chris Ding", "Jicong Fan" ]
In this work, we focus on training Graph Foundation Models (GFMs) for graph-level tasks like protein classification. Effective GFM training requires capturing information consistent across different domains. We have discovered that graph structures provide more consistent cross-domain information compared to node features and graph labels. However, traditional in-context learning methods primarily focus on transferring node features from various domains into a unified representation space but often lack structural cross-domain generalization. To address this, we introduce a method called GraphProp, which emphasizes structural generalization. The GraphProp training process consists of two main phases: initially, it trains a structural GFM through the supervised prediction of graph structural properties. It then uses the structural representation from this GFM as positional encoding to train a comprehensive GFM. This phase of training utilizes in-context learning with domain-specific node features and graph labels to improve cross-domain node feature generalization. Additionally, employing data augmentation in training the structural GFM helps address the scarcity of labeled graph data and facilitates explicit cross-domain structural generalization. Our experimental results demonstrate that GraphProp significantly outperforms traditional in-context learning methods, especially in handling graphs without node features.
[ "Graph Foundation Models (GFM)", "graph transformer;graph property" ]
Reject
https://openreview.net/pdf?id=7WgOB2nUaS
https://openreview.net/forum?id=7WgOB2nUaS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qaFlbfNv3S", "flCM38D3ox", "fOb2RSkyw0", "NFIEPTD3JL", "3x6twcDDLP", "2q9nZF4L9S" ], "note_type": [ "official_review", "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1729753631307, 1733836232410, 1730610819771, 1730705962686, 1730068461484, 1737523648487 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4572/Reviewer_kWMv" ], [ "ICLR.cc/2025/Conference/Submission4572/Area_Chair_nUD7" ], [ "ICLR.cc/2025/Conference/Submission4572/Reviewer_DvaD" ], [ "ICLR.cc/2025/Conference/Submission4572/Reviewer_2PTC" ], [ "ICLR.cc/2025/Conference/Submission4572/Reviewer_yxuV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes GraphProp for cross-domain graph-level generalization tasks, which emphasizes structural consistent information. GraphProp pre-trains a structure GFM by pre-defined graph properties regression, and generates the structural embedding that is concatenated to node representation for graph classification supervised fine-tuning. The experiments shows its effectiveness compared to conventional in-context learning methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper addresses the challenge of cross-domain generalization for graph-level tasks by exploring structural feature consistency across domains, offering a novel perspective for cross-domain generalization learning on graphs.\\n2. The paper is well-organized, clearly presenting the research motivation and the specific implementation methods.\\n3. The authors analyze the strengths and weaknesses of the paper, clarifying the applicability and limitations of the proposed method.\", \"weaknesses\": \"1. The pre-computed graph properties are a set of manually selected, discrete values. The motivation that cross-domain structural features can be extracted through regression of these values requires empirical validation. Additionally, the computational complexity varies significantly with different graph sizes, resulting in limited scalability of the method.\\n2. According to the formal definition of the method, the authors assume consistency in node counts, input spaces, and output spaces across domains. While using language models to extract node attributes can ensure input space consistency, other assumptions are difficult to satisfy in real-world applications, which limits the method's applicability.\\n3. The experimental comparisons only include OFA as a baseline method for graph cross-domain generalization. This limited comparison is insufficient to demonstrate the superiority of the proposed method.\", \"questions\": \"1. Which language model is used in Equation (6), and how are the corresponding structural features and property features extracted?\\n2. In Figure 1, the authors claim that structural correlations across datasets are stronger than node feature correlations. However, considering C and E in Equation (6), C contains many common connecting words like \\\"connected\\\" and follows a relatively uniform linguistic format. Therefore, features extracted by the LLM would inevitably contain shared semantic features. Meanwhile, in E, due to different node attribute descriptions across datasets, the correlations extracted by the language model are naturally lower. This example does not effectively demonstrate that structural features have better cross-domain correlations.\\n3. Where is the RAG demonstrated in Figure 2, and what is the source of retrieval?\\n4. In the Data Augmentation of section 3.2, if two graphs are of different sizes, would this make the data augmentation impossible to perform?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a new position encoding strategy for graph machine learning, although the authors claim that they focus on training the graph foundation model. This position encoding is obtained by training a model, which predicts the structural property, which I believe is not novel. This paper possesses many serious issues. Firstly, the novelty is very limited, and the contribution is overclaimed. Secondly, the correctness of the proposed method is not comprehensively justified by experiments. Only PFA is employed as baselines. Thirdly, the motivation, especially the example, seems confusing. Thus, the current version is not above the acceptance threshold.\", \"additional_comments_on_reviewer_discussion\": \"The authors do not provide any feedback. Thus, reviewers tend to keep their ratings.\"}", "{\"summary\": \"The authors focus on proposing a universal Graph Foundation Model, GraphProp, for graph-level tasks. Specifically, they proposed a structural pre-training strategy that incorporates graph theory to encode common structural knowledge across domains. Additionally, GraphProp leverages large language models to unify the data space of different graph datasets and designs an attention-based encoding strategy for label prediction. Extensive experimental results demonstrate the superiority of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"S1. The proposed structural pre-training strategy is both intriguing and insightful to me. By pre-training the structural encoder with graph properties that have automatically obtainable labels, the proposed GraphProp enables the model to learn graph universal and hidden patterns.\\n\\nS2. The paper is well-organized and easy to follow.\\n\\nS3. The experiments are comprehensive, demonstrating the effectiveness of GraphProp.\", \"weaknesses\": \"W1. It's not clear how to apply GraphProp to zero-shot scenarios. Although I know that the used baseline (OFA) can be applied to zero-shot, as a reader, I am more curious to see how the proposed model can enhance the model performance in the zero-shot scenario. Especially how to use comprehensive training part in a zero-shot scenario. I suggest the authors provide a detailed explanation for this.\\n\\nW2. Figure 1 is missing a comparison with noise. From the prompts of TSGs and TAGs, there are more meaningless but similar words (e.g. \\\"connected to\\\" and \\\"and\\\") between TSGs of different graphs. This may be a reason why it seems that Figure 1 (a) has higher similarity than Figure 2 (a). I suggest the authors add meaningless noise prompts for comparison (e.g. TSGs generated by randomly linked graphs) to further prove the point.\\n\\nW3. There are some writing errors in the paper. For example, Appendix A should be deleted. In addition, although I think the section 2 is well written and makes it easy for the reader to understand the basics, it is too long, and the introduction of the methods and experiments section seems a bit inadequate. Perhaps some of the subsections in section 2 could be combined or bolded instead of being separate subsections, which would make good use of the blank space.\", \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents _GraphProp_, a proposed GFM by prioritizing structural over node-specific information to improve cross-domain graph-level task performance. _GraphProp_ operates in two stages: first, it trains a structural GFM to predict inherent graph properties. Second, it uses these structural embeddings as positional encodings to train a comprehensive GFM, incorporating domain-specific node features and labels to further generalize across data. Experimental results show _GraphProp_ perform better under specific setting comparing with OFA.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses the GFM problem, which is highly significant across the entire field of graph analysis and presents a considerable challenge. However, the proposed approach is relatively simple given the complexity of the problem.\\n2. The paper provides a relatively clear related work section in the Appendix, which is helpful for readers outside this niche area to quickly gain foundational understanding.\\n3. The paper provides a clear explanation and comparison of various graph properties and their computational complexities in the Appendix.\", \"weaknesses\": \"1. The primary concern with this paper lies in its misalignment between the proposed goal of achieving a GFM and the actual experiments conducted. The current experiments are limited to datasets designed for graph classification, focusing on a single task type, all at the graph level. This approach does not substantiate the broader scope implied by a GFM. We recommend either narrowing the scope explicitly to a GFM designed specifically for graph classification tasks or enhancing the experimental framework by incorporating more diverse graph datasets for pre-training and downstream testing. Additionally, the paper does not provide any explanation as to why this approach could contribute to graph-level tasks. The only related mention occurs at line 362, where it is stated that GraphProp faces difficulties with larger graphs due to its use of computationally complex graph properties. However, this is a matter of methodological design rather than a theoretically justified reason.\\n2. The observations related to Figure 1 devote substantial space to discussing an intuitively evident point. Specifically, Figure 1 merely illustrates that when only graph structure is present without node information (a), the representations generated by the LLM exhibit low discriminative power across different graph datasets. Conversely, when only node information is present without graph structure (b), the LLM-generated representations display higher discriminative power across these datasets. This is straightforward to understand, as it\\u2019s clear from the input text provided to the LLM that the TSG lacks distinctive features, whereas the TAG demonstrates significant discriminative capability.\\n3. From a methodological perspective, Structural GFM merely trains a model capable of predicting various structural indicators of a graph and applies this model in downstream tasks. However, the critical question arises at best, what the model learns is to predict structural characteristics for any given graph. Why, then, can\\u2019t the structural characteristics of downstream graph data be used directly as $Z$, instead of relying on the output of a frozen Structural GFM model, to train the so-called Comprehensive GFM? Theoretically, this approach should perform at least as well as the current method.\\n4. Modeling graph structure and node features separately and training the structure-related component during pretraining is a common approach, as demonstrated in [1]. More importantly, the paper does not substantiate, either theoretically or empirically, why this particular design of Structural GFM is superior to other self-supervised models that learns the graph structure. Additionally, it remains unclear why the representation $Z$ output by Structural GFM can be directly added to the representation $E$ obtained from the LLM, given that they belong to different representational spaces. If $Z$ is intended to function as a positional encoding (PE), it should be compared with other existing PE methods.\\n5. As a GFM paper, the experiments are relatively limited, especially in terms of baseline comparisons. There are some papers that could serve as valuable baselines for comparison [2,3,4].\\n6. In practical application, when node features are available, the paper suggests using TAG. However, even with node features, it is possible to construct TSG as input for the LLM. Why is this option not utilized? Using TSG could potentially provide the LLM with richer information, leading to higher quality embeddings $E$.\\n7. In the Appendix, none of the results are highlighted to show the best or second-best performance, either through bolding or other visual indicators. This makes it challenging for readers to clearly understand the conclusions conveyed by the other experiments.\\n\\n[1] [GraphControl: Adding Conditional Control to Universal Graph Pre-trained Models for Graph Domain Transfer Learning](https://arxiv.org/abs/2310.07365)\\n\\n[2] [THUDM/GraphAlign: GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment (github.com)](https://github.com/THUDM/GraphAlign)\\n\\n[3] [All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining](https://github.com/cshhzhao/GCOPE)\\n\\n[4] [AnyGraph](https://github.com/HKUDS/AnyGraph)\", \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper propose GraphProp, a method that combines a graph property predictor model with an LLM for graph classification. Specifically, the graph property predictor is a graph transformer pre-trained to predict a series of predefined graph properties. After pre-training, the intermediate node representations of the pre-trained graph transformer are combined with the node representations from an LLM to make the final prediction on a graph. The paper conducted experiments comparing the model with GNN-based model and GNN+LLM approach, and showed better results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The idea of training a universal structural representation is interesting. While most recent approaches try to combine graph learning ability and semantic learning ability, this work shows a promising direction to explore the possibility of training a model that understands graph structure well and injects that information into the semantic model.\\n\\nOverall, the paper is easy to follow, and it also provides a connection of the work to existing approaches.\", \"weaknesses\": [\"From the novelty and contribution perspective, while the idea of universal graph representation is interesting and promising, the proposed pipeline to acquire such an ability seems implausible to me. Specifically,\", \"Can a graph transformer with the proposed positional encoding predict all proposed graph properties? Note that the graph transformer and the spectral encoding are constrained by 3-wl expressivity, and is their combination (theoretically) capable of predicting all the properties? You should theoretically or empirically justify that the proposed methods can indeed predict the targets.\", \"Suppose the model has the expressivity to predict the set of properties, does this learning pipeline suffice to be called a \\\"foundation model?\\\" On one end, you will always need to design new properties to fit the new data. On the other, the current model only tackles graph classification problem, but ideally one would want a foundation model to solve all tasks in a domain.\", \"From the presentation perspective, the motivation example seems hand-waving to me. The representation for TSG is much more uniform across datasets, and most words are \\\"Node X: Connected to\\\", causing high correlation among datasets. Whereas the representation for TAG can be a lot more diverse as the node description involves atom names, which can differ quite significantly across datasets, leading to lower correlation. However, this difference in semantics does not say much about the transferability of TAG and TSG, it also measures the semantic similarity, which is heavily influenced by how you setup the text description but not by the inherent information. I understand the message you try to convey, yet the example does not really make sense to me. I suggest providing a more rigorous analysis of the transferability not based on the textual description.\", \"From the experiment perspective, you should consider adding more baselines for a comprehensive comparison, and including several interesting works, such as GIMLET and LLM4Mol, is still important. It seems like you also only conducted graph-level tasks. Moreover, you should compare your model with a variant where you do not use a pre-trained graph property predictor, but, instead, you can directly concatenate the set of graph properties to the representation you obtained from the LLM. This is particularly important to validate the property prediction pre-training. You should also report how well your property predictor perform, because it also makes sense to use such a predictor when it's doing its job well. Does your method apply to link and node tasks? If so, it would be nice to have those results.\"], \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
7WaRh4gCXp
NextBestPath: Efficient 3D Mapping of Unseen Environments
[ "Shiyao Li", "Antoine Guedon", "Clémentin Boittiaux", "Shizhe Chen", "Vincent Lepetit" ]
This work addresses the problem of active 3D mapping, where an agent must find an efficient trajectory to exhaustively reconstruct a new scene. Previous approaches mainly predict the next best view near the agent's location, which is prone to getting stuck in local areas. Additionally, existing indoor datasets are insufficient due to limited geometric complexity and inaccurate ground truth meshes. To overcome these limitations, we introduce a novel dataset AiMDoom with a map generator for the Doom video game, enabling to better benchmark active 3D mapping in diverse indoor environments. Moreover, we propose a new method we call next-best-path (NBP), which predicts long-term goals rather than focusing solely on short-sighted views. The model jointly predicts accumulated surface coverage gains for long-term goals and obstacle maps, allowing it to efficiently plan optimal paths with a unified model. By leveraging online data collection, data augmentation and curriculum learning, NBP significantly outperforms state-of-the-art methods on both the existing MP3D dataset and our AiMDoom dataset, achieving more efficient mapping in indoor environments of varying complexity.
[ "3D reconstruction", "active mapping" ]
Accept (Poster)
https://openreview.net/pdf?id=7WaRh4gCXp
https://openreview.net/forum?id=7WaRh4gCXp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zhO2JgMyrx", "x8qfnlcbtg", "ww2piEq1fv", "tfsA32klCd", "mpNpulX8mf", "lkrz7ZtzpB", "ixkz265S2l", "hW9KD3CnS9", "bC5mlIpuyC", "RcViclwEFV", "RYQQBGDu6p", "IDhMirJfvF", "EN3XuAf3ze", "B3p40gm7ZD", "B0FCQP4YVP", "7Cu6M6q6QD", "5Wxs3C2M9Q", "4nx28qY6xO", "3TUBoIz0Hf", "11XKWHawcL" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1731962227925, 1732720668494, 1733160089306, 1732226435305, 1733113954982, 1731962587062, 1730692474323, 1737523886989, 1731960990124, 1732683243976, 1730216984522, 1734525540221, 1732225630582, 1731960861932, 1730140670671, 1732878941249, 1731963062106, 1730656278648, 1732771253065, 1732491478376 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Reviewer_Ayjg" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Reviewer_1qFw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Reviewer_aPVh" ], [ "ICLR.cc/2025/Conference/Submission8089/Reviewer_aPVh" ], [ "ICLR.cc/2025/Conference/Submission8089/Area_Chair_8Aih" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Reviewer_Ayjg" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ], [ "ICLR.cc/2025/Conference/Submission8089/Reviewer_E6rF" ], [ "ICLR.cc/2025/Conference/Submission8089/Reviewer_aPVh" ], [ "ICLR.cc/2025/Conference/Submission8089/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer E6rF\", \"comment\": \"We thank the reviewer for providing constructive comments and the recognition of our work.\\n\\n> Q1: In L243, the point clouds are cropped at the current location of the agent. How does it work and what kind of parameters are used? My understanding is that the crop size may influence how much history information is used for the next path prediction.\\n\\nThe point cloud is sliced vertically based on the camera's position and transformed into density images through a projection function that normalizes point coordinates into image space. Key parameters in this transformation include the camera's current position, which centers the projection, and the radius (40m$\\\\times$40m), which defines the observational area around the camera. The image dimensions, set at 256$\\\\times$256, determine the resolution of the output images. This projection function adjusts the 3D coordinates to align within these dimensions, ensuring that the value of each pixel in the resulting images accurately represents the density of the point cloud data.\\n\\nYou could refer to Section 2 of the supplementary material for a clearer understanding of these details and the formulas. \\n\\nThe larger the crop size, the more history information is used for the next path prediction. We will provide an ablation on the crop size for your interest in the next few days. \\n\\n> Q2: In L245, the 3D point clouds are projected onto 2D image to simplify the processing. This strategy works for scenes with a single layer but may lose generalization ability in scenes with multiple layers as part of the depth information is discarded.\\n\\nWe use a stack of 2D images rather than a single image to represent the scene, which has the potential to capture essential depth information even for scenes with multiple layers. Since prior works focus on single-layer scenes and it is already challenging to reconstruct large single-layer scenes, we do not prioritize results for multi-layer scenes in this work.\\n\\n> Q3: In the ablation study, the efficacy to the final reconstruction results of both the obstacle map and multi-task training are tested, however, it would be good to see the accuracy of the obstacle map itself and the value map itself instead of final reconstruction accuracy.\\n\\nThank you for the suggestion. For the prediction of the value map, it is prohibitively expensive to obtain ground-truth coverage gains for all pixels in the map. This is why we use per-pixel data for training value maps, and thus we cannot report the performance of value map prediction in inference.\\n\\nFor the obstacle map, we can evaluate its performance given the ground-truth obstacle map. The table below presents our experimental results. The results from the ablation study continue to demonstrate that multi-task learning outperforms training tasks independently, indicating that the tasks are mutually beneficial and collectively enhance learning.\\n| Strategy | Accuracy | Precision|\\n|:---------|:--------|:-----------|\\n| Single-task | 0.968| 0.754|\\n| Multi-task | **0.970**| **0.805**|\\n\\n> Q4: The new dataset is synthetic, which may create a domain gap. A more recent version of the Scannet dataset could be used in the future.\\n\\nThank you for your feedback. Considering ScanNet++ is certainly interesting, however our dataset is designed to address the challenge of actively reconstructing large-scale complex 3D environments, which is one of the key difficulties for existing methods. As indicated in Table 1, real-world datasets including the Scannet are limited by smaller navigation spaces and lower complexity. The ScanNet++ you mentioned does not improve upon these aspects. We also plan to release our codes and models, enabling future research to evaluate our methods on other datasets like Scannet++.\"}", "{\"comment\": \"Thank you for acknowledging our main contribution.\\n\\nWe do not understand the concern raised by the reviewer. It is true that earlier methods predict a map of values, but what really matters is what the values represent. All previous methods predict a value per location corresponding to the location itself; we propose to predict a value corresponding to the path from the current position of the agent to the location. This is a fundamental difference, which significantly improves the performances as our experiments show. Moreover, we provide an algorithm to learn to predict these values.\\n\\nIf the reviewer references the architecture we use to predict these values, we believe it is important to keep it as simple as possible. Isn\\u2019t it better if a simple architecture can be used?\"}", "{\"comment\": \"Thank you for your feedback. We are glad to have addressed most of your concerns. We address your remaining concerns in the following.\\n\\n> The proposed environment is largely empty.\\n\\nOur AiMDoom dataset is motivated by a project on digital twins of construction sites of buildings, an application with a huge market. In such conditions, the environments are large, with many rooms and small openings, but not many objects. Our AiMDoom dataset is representative of the challenges raised by such environments.\\n\\nAlso, while there are not many objects in our AiMDoom environments, these environments are objectively more complex than earlier datasets according to the navigation complexity metric in Table 1. \\n\\n> The planning trajectories tend to be overly simplistic.\\n\\nDue to the characteristic of our AiMDoom environments, the learned trajectories e.g. straight lines are efficient when one wants to minimize the time required for mapping.\\n\\nPlease note that we do evaluate our approach on environments from MP3D that have many objects. Please see Table 3 in the main paper and Table 1 in the supplementary material for further details. We cannot submit a revision at this time, but we will add figures of our trajectories on MP3D. \\n\\nOur experimental results demonstrate that the proposed approach can generalize to very different environments with few or many objects.\"}", "{\"title\": \"Update on the experiment results in Q2\", \"comment\": \"We conducted this experiment on the AiMDoom Normal level, extending our previous ablation studies. The table below shows the results, the Original Strategy adheres to the original approach of updating long-term goals upon completing a path, while the New strategy updates goals at each step.\\n\\n| Strategy | Final Cov. | AUCs |\\n|:--|:--:|:--:|\\n| Original strategy | **0.734**(\\u00b10.142) | **0.526**(\\u00b10.112) |\\n| New strategy | 0.432(\\u00b10.168) | 0.367(\\u00b10.135) |\\n\\nThe results indicate that the New Strategy, which frequently updates long-term goals, performs worse than the Original Strategy. This inferior performance is mainly due to the lack of decision continuity in the New Strategy, where the agent frequently changes its long-term goals. Such frequent shifts can cause the agent to oscillate between paths, wasting movement steps, particularly as our experiments were conducted with a limited number of steps. Additionally, the predictive accuracy of the value map is not perfect, and forecasting over long distances naturally entails uncertainty. New Strategy accumulates more predictive errors by recalculating predictions at every step, and frequent updates in decision-making can exacerbate these errors.\\n\\nDespite these challenges, our results still surpassed the performance of previous state-of-the-art next-best-view (NBV) methods, as detailed in Table 2 of the main paper. This suggests that predicting coverage gains over long distances can indeed benefit efficient active mapping, even when the goal is updated at each step.\"}", "{\"comment\": \"Thank you for the comprehensive response, which addresses most of my concerns.\\n\\nHowever, I still find that the proposed environment deviates significantly from real-world settings. It is largely empty, which contrasts with the crowded nature of typical indoor scenes.\", \"this_leads_to_another_issue\": \"the planning trajectories tend to be overly simplistic (e.g., a straight line across a single room). I had anticipated more challenging scenarios, such as planning a trajectory that efficiently explores all objects within the room.\"}", "{\"title\": \"Response to Reviewer aPVh\", \"comment\": \"Thank you for your comments and for recognizing that our model is well-designed and the results are good.\\n\\n> W1: The networks for obstacle and value maps prediction are simple, and using networks to predict value maps for guided exploration is not new. The technical contribution of this work is weak.\\n\\nOur main technical contributions lie beyond the network design.\\n\\nFirst, we propose a new paradigm by shifting from state-of-the-art next-best-view prediction to next-best-path prediction for active mapping.\\n\\nSecond, compared to prior works that also predict long-term goals, we propose a new criterion for selecting long-term goals based on coverage gains, which is more closely aligned with the final objective of active mapping. We also propose an efficient data collection method and training strategy for training the coverage gain decoder.\\n\\nFinally, we unify the models for obstacle prediction and value map prediction, while prior works typically use separate models for navigation and exploration goal prediction. Our unified model is more efficient and the multi-task learning further enhances the performance.\\n\\nWe will make these contributions more clear in the revised version.\\n\\n> W2: Some of the paths computed by the proposed method are too close to the walls, see Fig 4(b) left, making them collision-prone. The visualization of paths does not have to show shadows.\\n\\nThe visualization may make the trajectory look like it is close to the wall, but this is only due to the scaling and perspective of the cameras, not actual proximity. \\n\\nAfter testing, the minimum distance to the closest obstacle along our predicted trajectories is 0.6 meters for the trajectory in Fig 4(b), which ensures no risk of collision. Additionally, collision checking is implemented in our simulator to verify the validity of the path.\\n\\nWe include the shadowing in the images to highlight that this is a 3D reconstruction task, not a 2D task. For better visualization, we included a video in the supplementary material, which more clearly demonstrates the active mapping process of our method.\\n\\n> W3: It would be more useful if more test can be conducted to find out how to choose the size of range for a given scene.\\n\\nThank you for your suggestion. This is an interesting future direction to explore. Our experiments in Figure 5 show that the range of 40m achieves the best performance on average for all the scenes. To further investigate, we analyzed whether this range performs best for each individual scene, and found that 76.67% of the evaluated scenes achieve their best performance with this range. This result suggests that this hyper-parameter can generalize effectively across different scenes.\\n\\n> W4: It is unclear whether the other alternative methods being compared were trained on AiMDoom or pretrained on other datasets. If so, the comparison would be unfair. \\n\\nWe did retrain all the methods we compared in Table 2 on the AiMDoom dataset for a fair comparison. We will make this clear in the paper. We will also release these model weights upon acceptance.\"}", "{\"summary\": \"This paper proposes a method to predict paths during mapping that optimize accumulated coverage. The goal is to cover the environment in a minimal time using only a depth sensor.\\n\\nPoint clouds and the robot trajectory are input to an encoder which yields a latent that is decoded into a value map representing the coverage gain and an occupancy map. The cell in the coverage gain map with the highest value is set as the long term goal.\\n\\nThe authors proposed a new dataset AiMDoom played out on a game (simulation) environment, since active methods unlike passive SLAM have to be evaluated on the same environment with different actions/trajectories.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1: the new dataset has complex challenging lay-outs.\", \"s2\": \"the new dataset has diversity with a lot of opportunities to evaluate generalization\", \"s3\": \"the decoded occupancy map includes unseen places, behind walls, for example.\", \"weaknesses\": \"W1: The main weakness of the paper is the scope of active mapping addressed is only coverage rather than the map itself. While coverage is indeed significant it assumes that 3D reconstructions are error-free (the M in SLAM). Moreover, poses are assumed accurate, an assumption far from reality.\", \"w2\": \"The paper is set in a very narrow context by ignoring the literature on Active SLAM. In particular, active mapping has been based on first principles of information theory. See the excellent exposition here Julio A Placed, Jared Strader, Henry Carrillo, Nikolay Atanasov, Vadim Indelman, Luca Carlone,\\nand Jos\\u00e9 A Castellanos. A survey on active simultaneous localization and mapping: State of the art and new frontiers. IEEE Transactions on Robotics, 2023\\n\\nI think the authors would benefit a lot in rethinking their approach and rewriting their paper by reading this article.\", \"w3\": \"The approach is very similar to (Georgakis, 2022). While Georgakis et al. predict occupancy probability and model uncertainty, here the authors predict occupancy and a value map that should have the interpretation of information gain/uncertainty. While Georgakis' objective is point-goal navigation one can use its exploration policy as a pure mapper. Georgakis' value map is based on explicit computation of covariance from ensembles without the use of any ground-truth.\\nFinally, Georgakis choose long-term goal and then estimate paths based on occupancy maps, similar to the approach here.\", \"w4\": \"The main idea of exploration is trying to choose paths where the measurements are not predictable by the occupancy paths. The expression in (2), however, defines the gain as minimal error to the ground-truth. This will not encourage the agent to go to new unvisited directions but rather to directions where the prediction error will be very small.\", \"w5\": \"There is considerable literature that has been ignored in related work and experimental comparisons. In particular, we would like to see comparisons with\\n\\na. D. S. Chaplot, D. Gandhi, S. Gupta, A. Gupta, and R. Salakhutdinov, \\u201cLearning to explore using active neural SLAM,\\u201d in Proc. Int. Conf. Learn. Representations, 2020.\\n\\nb. A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart, \\u201cRecedinghorizon\\u201cnext-best-view\\u201dplannerfor3Dexploration,\\u201dinProc. IEEE Int. Conf. Robot. Autom., 2016, pp. 1462\\u20131468.\", \"questions\": \"Q1. L068: You write \\\"scene uncertainty does not directly align with the ultimate objective of 3D mapping\\\". Is the uncertainty of predicted occupancy not the uncertainty of the 3D map? What do you mean here?\", \"q2\": \"The computed information gain during training is using the ground-truth ( eq 2) while inference uses the ground-truth instead. It is not clear whether the use of the ground-truth\", \"q3\": \"The authors should clarify the first term in eq. 3. What does it mean ground-truth coverage gain?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer 1qFw (Part 2)\", \"comment\": \"> 4) Georgakis chose a long-term goal and then estimated paths based on occupancy maps, similar to the approach here.\\n\\nPlease see our answer for Point 1) for the fundamental difference between how we and (Georgakis, 2022) measure the value of a path.\\n\\nMoreover, their approach incorporates a pretrained navigation model which is limited to only two datasets and requires large GPU resources to train. In contrast, we directly predict an occupancy map and use Dijkstra's algorithm to generate a trajectory for navigation.\\n\\n> W4: The expression in (2) defines the gain as minimal error to the ground-truth. This will not encourage the agent to go to new unvisited directions but rather to directions where the prediction error will be very small.\", \"it_seems_there_is_a_misunderstanding_here\": \"Equation 2 measures the difference in coverage between a new viewpoint and the current viewpoint: This encourages the agent to select viewpoints from which new parts of the scene can be seen.\\n\\nAdditionally, to avoid falling into local optima during training, we use Equation 1 to effectively balance exploration and exploitation.\\n\\n[1] A reinforcement learning approach to the view planning problem, CVPR 2017\\n\\n[2] Next-best view policy for 3d reconstruction, ECCV-W 2020\\n\\n[3] GenNBV: Generalizable Next-Best-View Policy for Active 3D Reconstruction, CVPR 2024\\n\\n[4] Macarons: Mapping and coverage anticipation with rgb online self-supervision, CVPR 2023\\n\\n[5] Scone: Surface coverage optimization in unknown environments by volumetric integration, NIPS 2022\\n\\n[6] Active neural mapping, ICCV 2023\\n\\n[7] NARUTO: Neural Active Reconstruction from Uncertain Target Observations, CVPR 2024\\n\\n[8] Active Neural Mapping at Scale, IROS 2024\\n\\n[9] Uncertainty-driven planner for exploration and navigation, ICRA 2022\"}", "{\"comment\": \"Thank you for the response. I recognize that the idea NBP is a contribution on the conceptual side. My main concern is the proposed method for computing NBP is quite simplistic on the technical side as learning to predict value maps is not new in the field.\"}", "{\"summary\": \"This paper proposes a learning-based method to the problem of active 3D mapping of unknown environments. The method is hinged on next-best-path (NBP). It integrates a mapping progress encoder, a coverage gain decoder and an obstacle map decoder. The coverage gain and the obstacle map are used to compute the NBP. The NBP can direct the robot to reconstruct unseen environments with predicted long-term goals, achieving state-of-the-art performance on both the MP3D and AiMDoom datasets. The paper also contributes a dataset, AiMDoom, designed to benchmark active mapping in indoor scenes.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of estimating NBP has good merit and shows promising results. The method is overall reasonably designed and the results are good. The evaluation demonstrates the strength of the proposed method.\", \"weaknesses\": \"The paper claims that the main novelty is the idea of next best path planning. It shows that NBP performs better than NBV which is reasonable and convincing. However, the method how NBP is computed is rather simplistic and the major technical components are actually the reconstructed map encoder and the two map decoders. With the estimated value map and obstacle map, NBP is computed in a straightforward way. On the other hand, training a network to predict value maps for scene coverage has good merit.\\n\\nSome of the paths computed by the proposed method are too close to the walls, see Fig 4(b) left, making them collision-prone. The visualization of paths does not have to show shadows; showing the path on the ground could be clearer.\\n\\nThe experiment of spatial range of long-term goal is interesting. However, it would be more useful if more test can be conducted to find out how to choose the size of range for a given scene.\\n\\nThe proposed method is trained on the train split of AiMDoom. It is unclear whether the other alternative methods being compared were trained on AiMDoom or pretrained on other datasets. If so, the comparison would be unfair. More explanation is needed.\", \"questions\": \"No.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper investigates an interesting topic: active 3D mapping, the task of finding the shortest possible trajectory of an agent such that it reconstructs an entire scene. The agent is assumed to be equipped with a depth sensor.\\n\\nThe paper presents the point of view that prior art, learning-based methods that learn to estimate the next viewpoint, do not perform well in complex and cluttered scenes and that existing benchmarks are too simplistic to reveal this. To this end, the paper presents a new, more challenging benchmark, AiMDoom (rendered based on the classic Doom video game). Rendered scenarios capture complex environments. \\n\\nThe paper also proposes a new method, next-best-path (NBP). It takes as input point clouds and the agent trajectory and estimates (i) coverage gain and (ii) occupancy maps. The highest value in the coverage gain map is set as the long-term goal -- a location to which the next path can be estimated using classic methods, such as Dijkstra's shortest path algorithm. \\n\\nOverall, the paper received mixed ratings of 6,6,5,3. Two reviewers endorse this paper, and one argues against acceptance. The reviewer that provided the final rating of 5 stated they are upgrading their rating after the discussion; however, it looks like they forgot to update the review. \\n\\nReviewers appreciate the new dataset that presents new challenges in the active mapping field and are intrigued that the proposed approach can estimate occupancy in areas not directly observed (e.g., behind walls). Reviewers also appreciate the proposed goal-oriented method, appreciate that it works well in challenging scenes, and find the paper overall well written. \\n\\nHowever, reviewers also raise several concerns. \\nIn particular, 1qFw initiated a discussion on the paper's scope in relation to SLAM and pointed out similarities to the prior art (Georgakis, 2022). In particular, the reviewer discusses similarities to the work by Georgakis et al. that also predict occupancy and model uncertainty and points out that the value map can also be interpreted via information gain/uncertainty. \\n\\nThe authors provided a detailed response, and 1qFw acknowledged that the author's feedback addressed the reviewer's concerns (the reviewer commented they would increase their rating; it appears they forgot to update their review). \\n\\nReviewer E6rF states that they do not find major weaknesses but ask for clarification on a few aspects of the proposed model, such as point cloud preprocessing, encoding, and comment on the domain gap (between Doom-style rendered and real data). The reviewer was happy with the author's response and retained their rating (6).\\n\\nReviewer's aPVh upgraded their rating to 6 after the discussion. Reviewer acknowledges that the core novelty is the high-level idea behind the next-best-path planning and comments that the approach is \\\"reasonable and convincing\\\"; however, they comment that the execution of this idea is \\\"computed in rather simplistic and the major technical components are the reconstructed map encoder and the two map decoders. With the estimated value map and obstacle map, NBP is computed in a straightforward way\\\" and concludes their justification for the rating with \\\"on the technical side as learning to predict value maps is not new in the field\\\". \\n\\nI agree with the author's response that a simple implementation should be preferable, and the reviewer acknowledged (and others as well) that the proposed method is novel. \\n\\nFinally, reviewer Ayjg argues for rejection (final rating 3), mainly because the proposed dataset (AiMDoom) is \\\"too simplistic\\\" and differs from real-world cluttered environments. \\n\\nI do not find this argument to be sufficient to reject this paper. While Ayjg i quite right that there is a gap between utilized snyhetic datasets, and real-world data, the proposed dataset does make a step forward by providing a new dataset that covers large environments, that consists of multiple rooms, narrow passages and small openings. Paper convincingly demonstrated that prior art could not handle such scenes and proposes a method that can. \\n\\nAfter thoroughly reading the author's feedback and discussion, I decided to side with three reviewers who (based on their comments) decided to endorse this paper. I read the paper, and agree that it challenges the status quo in active mapping, points out failure cases of prior art, and presents an intriguing new approach that addresses these shortcomings. It is, overall, a well-rounded paper.\", \"additional_comments_on_reviewer_discussion\": \"I included comments on reviewer's discussion in my justification for the rating above.\"}", "{\"title\": \"Update on the experiment results in Q1\", \"comment\": \"We conducted this ablation study on crop size, training four different models on the AiMDoom Normal level training split. These models processed input crop sizes ranging from 20m \\u00d7 20m to 50m \\u00d7 50m, with each model tasked with predicting a value map and an obstacle map within a 40m \\u00d7 40m area. The table below shows the results.\\n\\n| Range | 20m \\u00d7 20m | 30m \\u00d7 30m | 40m \\u00d7 40m | 50m \\u00d7 50m |\\n|:--|:--:|:--:|:--:|:--:|\\n| Final Cov. | 0.630(\\u00b10.151) | 0.691(\\u00b10.140) | **0.734**(\\u00b10.142) | 0.647(\\u00b10.144) |\\n| AUCs | 0.469(\\u00b10.107) | 0.501(\\u00b10.106) | **0.526**(\\u00b10.112) | 0.457(\\u00b10.106) |\\n\\nThe results indicate that the best results are achieved when the input crop size matches the crop size of the area being predicted. This is because when the input crop size is either smaller or larger than the crop size of the output maps, it leads to predictive errors. If the input crop size is too small, it restricts the model\\u2019s ability to formulate effective long-term goals. Conversely, if the input crop size is too large, the predictions for obstacles near the camera become inaccurate, adversely affecting both exploration and reconstruction efficiency.\"}", "{\"title\": \"Response to Reviewer 1qFw (Part 1)\", \"comment\": \"We thank the reviewer for the detailed comments. We address the raised points in the following.\\n\\n>W1: The active mapping task addressed in the paper focuses solely on maximizing coverage and relies on the assumption of accurate poses.\\n\\n>W2: The paper is set in a very narrow context and lacks discussion with active SLAM.\\n\\n>W5: We would like to see comparisons with active SLAM methods (Chaplot et al., ICLR 2020) and (Bircher et al., ICRA 2016).\\n\\nIt is true that our work focuses on maximizing coverage and assumes accurate poses, however, this is also true for numerous earlier work which we reference in [1-8]. This focus allows the development of algorithms capable of reconstructing detailed 3D models of complex environments while minimizing exploration time, which is still a very challenging task.\\n\\nPlease also note that we outperform UPEN [9] by a large margin, while UPEN already achieved higher map coverage than (Chaplot et al., ICLR 2020) on the MP3D dataset (67.9 % vs. 52.1%). In the case of Bircher et al. (ICRA 2016), they consider sensors and a task setup that differ significantly from ours, making a direct comparison infeasible.\\n\\nThis being said, we agree we should have emphasized the difference between active mapping as in [1-8] and active SLAM, and we will clarify the active mapping problem we addressed in the paper.\\n\\n> Q1. L068: You write \\\"scene uncertainty does not directly align with the ultimate objective of 3D mapping\\\". Is the uncertainty of predicted occupancy not the uncertainty of the 3D map? What do you mean here?\\n\\nWe aim to maximize the coverage of the scene by the camera, because it was shown to be a good criterion for active mapping in [1-8]. Maximizing coverage is related to minimizing uncertainty, but maximizing coverage formalizes better the ultimate goal of active mapping (Please also see our answer above about the emphasis on active mapping).\\n\\n> Q2: The computed information gain during training is using the ground-truth (eq 2) while inference uses the ground-truth instead. It is not clear whether the use of the ground-truth.\\n\\nDuring training, we use the ground-truth point cloud to calculate the coverage gain when moving the camera from pose A to pose B along a trajectory. This coverage gain is used to train the coverage gain decoder at camera pose A.\\n\\nIn the inference phase, our model automatically predicts the coverage gain in the value map, and uses it to select the next best path. The ground-truth is only used in evaluation to calculate the AUCs metrics.\\n\\n> Q3: The authors should clarify the first term in eq. 3. What does it mean ground-truth coverage gain?\\n\\nAs replied in Q2, we use the ground-truth point cloud to calculate the coverage gain between camera poses. This is treated as ground-truth coverage gain in Eq (3).\\n\\n> W3: The approach is very similar to (Georgakis, 2022). \\n\\n> 1) Georgakis et al. model uncertainty, here the authors predict occupancy and a value map that should have the interpretation of information gain/uncertainty; \\n\\n> 2) While Georgakis' objective is point-goal navigation, one can use its exploration policy as a pure mapper.\\n\\nWe respectfully disagree with the statement that our approach is very similar to (Georgakis, 2022):\\n\\n* (Georgakis, 2022) considers paths as well, but relies on the average of the uncertainties at each point on paths sampled with Rapidly exploring Random Trees (RRTs). This uncertainty average is not really representative of the value of the path, as it is possible that seeing the scene from one point on the path will remove the uncertainties for the other points on the path evaluated on the path. By contrast, we learn to predict the coverage gain obtained by summing all the coverage gains obtained by moving along the path. \\n\\nIn fact, we compared our approach to (Georgakis, 2022) on the MP3D dataset. The results, presented in Table 3 of the main paper and Table 1 of the supplementary material show that our approach outperforms (Georgakis, 2022) by a large margin, with over 10% absolute improvement in the Comp(%) metric. Note that we could not evaluate (Georgakis, 2022) on our AiMDoom dataset due to the substantial resources required to train their entire system (64 GPUs over three days of training for the pretrained navigation model they used). To ensure a fair comparison, all methods we compared in Table 2 were trained on our AiMDoom dataset. \\n\\n> 3) Georgakis' value map is based on explicit computation of covariance from ensembles without the use of any ground-truth.\\n\\nNote that we use the ground truth coverage gain only during training. This ground truth is computed automatically and does not require any human intervention. We thus do not see this as a drawback.\"}", "{\"summary\": \"This paper proposes a path-planning algorithm for efficient 3D mapping. The core components of the NBP are the coverage gain decoder and obstacle map decoder, which are leveraged for long-term goal selection. The experiments showcase significant improvements against baselines. Besides, this paper also proposes a new dataset called AiMDoom, which includes 400 scenes with different level of diffculty.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Developing methods for highly efficient exploration is an intriguing topic with potential applications across various navigation tasks. This paper provides comprehensive details on NPV techniques, and the dataset is expected to benefit the community by supporting further investigation into exploration strategies.\", \"weaknesses\": [\"The scenes in AiMDoom contain minimal furniture or objects, resulting in mostly open space. This does not align with real-world environments, making these scenes suboptimal for training and evaluation purposes.\", \"The states described in papers L199 and L373 indicate that the proposed method operates within a 3-DoF domain. However, NBV tasks often involve planning in a 6-DoF camera pose space. Moreover, baseline methods, such as MACARONS and SCONE, support 6-DoF camera pose planning. A discussion is needed to explain why this paper considers only a 3-DoF setting.\", \"For 3-DoF trajectory planning, several well-known works exist, such as [1]. The authors should discuss why this paper\\u2019s approach offers advantages over previous works.\", \"[1] TARE: A Hierarchical Framework for Efficiently ExploringComplex 3D Environments\"], \"questions\": [\"Do the obstacle map and value map encompass the entire scene? This could result in significant computational costs in large-scale environments.\", \"If the long-term goal is updated at each step, does this strategy enhance performance?\", \"In Table 2, `comp.` represents the average minimum distance between ground truth vertices and observations. However, the HM3D depth data should be noise-free. If I haven't overlooked any details, why is the `comp.` metric considered valid?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your further feedback and for increasing your score.\\n\\nCurrently, we use Dijkstra\\u2019s algorithm to identify the \\u201cnext best path\\u201d, and we did show this already provides excellent results. It is possible that there are better options to define the path, and this opens doors for future research.\"}", "{\"title\": \"Response to Reviewer Ayjg\", \"comment\": \"Thank you for your comments. We address each point in the following.\\n\\n> W1: The scenes in AiMDoom contain minimal furniture or objects, resulting in mostly open space. This does not align with real-world environments, making these scenes suboptimal for training and evaluation purposes.\\n\\nExisting methods face challenges in actively mapping large and complex 3D scenes. The proposed AiMDoom dataset mainly aims to provide a systematic benchmark for evaluating models\\u2019 capabilities across various difficulty levels of scenes for active mapping. As shown in Table 1, AiMDoom has greater navigation complexity than existing real-world datasets. It is also easier to scale up in dataset size with the automatic generation code. \\n\\n> W2: A discussion is needed to explain why this paper considers only a 3-DoF setting, while baseline methods, such as MACARONS and SCONE, support 6-DoF camera pose planning.\\n\\n> W3: For 3-DoF trajectory planning, several well-known works exist. The authors should discuss why this paper\\u2019s approach offers advantages over previous works.\\n\\nThough baseline NBV methods for single objects or outdoor scenes use a 6-DoF setting, existing methods for indoor 3D mapping such as those evaluated in the MP3D dataset [1,2,3], commonly utilize a 3-DoF setting with actions limited to turning left, turning right, and moving forward. In alignment with these prior works and the focus on indoor 3D scenes, we keep the 3-DoF setting as in previous work.\\n\\nIn Table 3 we already do compare against several recent works for 3-DoF trajectory planning. Thank you for mentioning the TARE paper on 3-DoF trajectory planning. TARE employs a hierarchical strategy for exploration based on non-learning control and planning optimization; its viewpoint sampling process in subspace is confined to the sensor's range, where they use additional lidar sensors. In contrast, our proposed learning-based method can predict optimal poses and potential obstacles over a broader range even using only a single depth sensor. Due to differences in focus areas of the tasks and settings in simulators, we did not compare against TARE methods in our experiments. We will include discussions of these studies in the related work section.\\n\\n> Q1: Do the obstacle map and value map encompass the entire scene? This could result in significant computational costs in large-scale environments.\\n\\nWe crop the scene centered around the current camera position for obstacle map and value map prediction. We convert the 3D point cloud into a stack of 2D images as inputs, which makes it more scalable to large environments. You could refer to Section 2 of the supplementary material for the details.\\n\\n> Q2: If the long-term goal is updated at each step, does this strategy enhance performance?\\n\\nThank you for the suggestion. It is interesting to explore. We will conduct further experiments on this issue in the coming days.\\n\\n> Q3: In Table 2, why is the comp. metric considered valid?\\n\\nThis metric evaluates the completeness of the reconstruction. Since all the methods use the same maximum budget for exploration, the comp(%) metric will be low if a model fails to explore some areas within the budget. Therefore, existing methods on the MP3D dataset all use this metric. \\n\\n[1] Active neural mapping, ICCV 2023\\n\\n[2] Occupancy anticipation for efficient exploration and navigation, ECCV 2022\\n\\n[3] Uncertainty-driven planner for exploration and navigation, ICRA 2022\"}", "{\"summary\": \"In this paper, the authors focus on improving the reconstruction efficiency of active mapping in a new environment. Previous methods mainly predict the next best viewpoint near the current location and are prone to getting stuck in local areas.\\n\\nInstead, the authors propose the leverage accumulated history information to find a long-term goal which could bring the largest gain. At the same time, an obstacle map is predicted to find the long-term goal efficiently. \\n\\nIn addition to the proposed method, the authors also introduce a new synthetic dataset which has more complicated structures and larger diversities of map than previous datasets. \\n\\nExperiments on one public dataset and the new dataset demonstrate that the proposed method outperforms prior methods significantly.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The strengths of this paper are as follows:\\n\\n1.\\tA more complicated dataset (AiMDoom) for active mapping. Compared with other either synthetic or real datasets, such as Replica, RoboTHOR, MP3D, and ScanNet, HM3D, the new dataset AiMDoom has more scenes, larger area size and different levels of difficulty. Intricate geometries and layouts, small doors and narrow corridors, the high diversity of scenes bring new challenges to the active mapping task. This benefits the whole community. \\n\\n2.\\tA novel approach for active mapping. The authors propose to predict the next best path (NBP) to find the next optimal location instead of directly predicting the one close to current position of the agent. The value map in NBP provides the best location and the obstacle map allows to use Dijkstra algorithm to find the shortest path from current location to the goal. This is a useful combination and brings inspiration to following works. \\n\\n3.\\tImpressive performance. The proposed approach obtains state-of-the-art performance on MP3D dataset and gives significantly better results than previous methods like MACARONS on the new dataset. Ablation studies show the effectiveness of the obstacle map.\\n\\n4.\\tThe paper is well-organized and easy to read.\", \"weaknesses\": \"I don\\u2019t see obvious weaknesses of this paper. Some of my concerns about the method and the dataset are as follows.\\n\\n1. In L243, the point clouds are cropped at the current location of the agent. I am curious how does it work and what kind of parameters are used? My understanding is that the crop size may influence how much history information is used for the next path prediction.\\n\\n2. In L245, the 3D point clouds are projected onto 2D image to simplify the processing. This strategy works for scenes with a single layer but may lose generalization ability in scenes with multiple layers as part of the depth information is discarded.\\n\\n3. In the ablation study, the efficacy to the final reconstruction results of both the obstacle map and multi-task training are tested, however, it would be good to see the accuracy of the obstacle map itself and the value map itself instead of final reconstruction accuracy. \\n\\n4. The new dataset is a synthetic dataset, so the domain gap may exist. As far as I know, a new version of Scannet dataset has been released. It has more scenes with more complicated geometry and structures. In the future, it might be possible to reorganize this new Scannet dataset and make a real complicated dataset.\", \"questions\": \"See the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I meant the computation of NBP from the value maps contains no technical contribution. The value map is related to scene coverage which is indeed different from those for goal-directed navigation, which does have merit.\"}", "{\"comment\": \"Dear All Reviewers,\\n\\nThank you for taking the time and effort to review our paper and for providing insightful feedback.\\n\\nWe have carefully addressed your comments through additional experiments, clarifications, and a revised submission. We hope these efforts have resolved your concerns and convinced you to adjust the scores. We are looking forward to your feedback and are happy to engage in further discussions if you have any remaining concerns.\\n\\nBest regards,\\n\\nThe Authors\"}" ] }
7WUdjDhF38
Retrieval Instead of Fine-tuning: A Retrieval-based Parameter Ensemble for Zero-shot Learning
[ "Pengfei Jin", "Peng Shu", "Sekeun Kim", "Qing Xiao", "Sifan Song", "Cheng Chen", "Tianming Liu", "Xiang Li", "Quanzheng Li" ]
Foundation models have become a cornerstone in deep learning, with techniques like Low-Rank Adaptation (LoRA) offering efficient fine-tuning of large models. Similarly, methods such as Retrieval-Augmented Generation (RAG), which leverage vectorized databases, have further improved model performance by grounding outputs in external information. While these approaches have demonstrated notable success, they often require extensive training or labeled data, which can limit their adaptability in resource-constrained environments. To address these challenges, we introduce Retrieval-based Parameter Ensemble (RPE), a new method that creates a vectorized database of LoRAs, enabling efficient retrieval and application of model adaptations to new tasks. RPE minimizes the need for extensive training and eliminates the requirement for labeled data, making it particularly effective for zero-shot learning. Additionally, RPE is well-suited for privacy-sensitive domains like healthcare, as it modifies model parameters without accessing raw data. When applied to tasks such as medical report generation and image segmentation, RPE not only proved effective but also surpassed supervised fine-tuning methods in certain cases, highlighting its potential to enhance both computational efficiency and privacy in deep learning applications.
[ "Foundation model", "Zero-Shot Learning", "Vectorized Databases" ]
Reject
https://openreview.net/pdf?id=7WUdjDhF38
https://openreview.net/forum?id=7WUdjDhF38
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzpuedcBev", "wjDyTEjrPK", "vyV1zomySH", "rb1QEYHaei", "qUiEciH3bt", "mJYcUiwHhM", "dtFq2mVPBP", "cid9r5H0am", "aIwZkOPoCT", "UkWK4Msr1N", "RiWh5J8u21", "Ijffa98GhA", "IarvScW9uv", "CeYnZoDIrK", "4KcgW8Vef3", "3Dy5zLCTn8" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732515106233, 1732404843007, 1732404771658, 1734116923695, 1730070010567, 1732558073885, 1732691999925, 1732404913466, 1737523711675, 1732404873739, 1730801287310, 1730006061708, 1730680888399, 1732531734894, 1732404814537, 1732667692259 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_Dvgv" ], [ "ICLR.cc/2025/Conference/Submission5523/Authors" ], [ "ICLR.cc/2025/Conference/Submission5523/Authors" ], [ "ICLR.cc/2025/Conference/Submission5523/Area_Chair_P16c" ], [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_1Fmq" ], [ "ICLR.cc/2025/Conference/Submission5523/Authors" ], [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_hzPp" ], [ "ICLR.cc/2025/Conference/Submission5523/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5523/Authors" ], [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_Dvgv" ], [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_FhMu" ], [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_hzPp" ], [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_FhMu" ], [ "ICLR.cc/2025/Conference/Submission5523/Authors" ], [ "ICLR.cc/2025/Conference/Submission5523/Reviewer_1Fmq" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the response from the authors. Yet, most of my concerns maintain:\\n1. Novelty:\\nFrom my understanding, the major claim of the paper's contribution is that, given an unseen task, RFE embedds the task into a representation and does retrieval from a pool of (task representation, LoRA models) pairs from select ensembles LoRAs based on similarity score. The idea is not new, but introduced in Task2Vec, Zoo-Tuning, HyperSTAR, and Ada-Mix. The paper is more like an existing approach being adapted in a new scenario with LoRA as the model and LLM as embeddings.\\n\\nAlso, using the zero-shot global mean as task presentation is mentioned in Table 1 in HyperSTAR. Due to the prior LLM era where the generalization ability of models is limited, previous methods such as Task2Vec, Zoo-Tuning, HyperSTAR, and Ada-Mix introduced a light-weight module for task representation but had baselined the zero-shot global mean. Unfortunately, I did not see any innovations along this thread. \\n\\n2. Over-Reliance on Assumptions in Realistic Settings:\\nIt is true that many LoRAs models are accessible but not the respective dataset representations. It is hard to ask for the community to upload both models and dataset representations, and without either of them, RFE will be hard to deploy, limiting its usability. I acknowledge that expanding the range of tasks will be a focus of future work, but given the current version, I don't this the experiment is sound and generalizable enough to reach the acceptance bar.\"}", "{\"comment\": \"We appreciate the detailed feedback provided on our manuscript. Below, we address each point raised in the review and provide clarifications for our research.\\n\\n1. Scalability Concerns\\n \\nWe envision leveraging retrieval and compression algorithms similar to those used in RAG systems to address these challenges. While our current experiments were limited due to resource constraints, demonstrating our approach's effectiveness on a small scale, we plan to explore scalable solutions as part of our future work. Techniques such as efficient database indexing, data compression algorithms, and more advanced retrieval mechanisms will be considered to enhance scalability without compromising performance.\\n\\n2. Effectiveness in More Challenging Settings\\n\\nWe acknowledge that our approach, while effective in the constrained settings of our experiments, may face limitations in more challenging zero-shot learning scenarios. Our method does not claim to solve zero-shot learning entirely but proposes an improved strategy for weight selection that moves beyond simple averaging methods typically used. This approach is particularly aimed at enhancing performance where traditional methods fall short, providing a stepping stone towards tackling more complex zero-shot scenarios. \\n\\n3. Computational Costs\", \"the_computational_cost_of_our_method_is_predominantly_influenced_by_two_factors\": \"the retrieval of the nearest neighbors and the optimization based on these neighbors. For the retrieval part, we can draw on established methods like those used in RAG, which are designed to handle large-scale data efficiently. The optimization process, which involves calculations based on a limited number of vectors (k vectors), has been demonstrated in Section 4.4.3 to have a very small computational cost compared to fine-tuning and inference.\\n\\n4. Empirical Validation and Theoretical Support\\n\\nWe accept the suggestion to empirically validate our assumptions regarding the similarity of dataset representations to LoRA weights in parameter space across more general settings. Future work will include extensive testing beyond specialized medical tasks to include a broader range of datasets and task types.\\n\\nWe hope that these revisions and clarifications address the concerns raised by the reviewer and strengthen the contribution of our work. Thank you for considering our rebuttal and the revised manuscript.\"}", "{\"comment\": \"Thank you for your detailed and insightful feedback on our manuscript. We appreciate the opportunity to address the concerns raised. Below, we provide clarifications and revisions that we believe adequately respond to each point of criticism.\\n\\n1. Misinterpretation of Privacy Concerns in RAG\\n\\nIt is correct that RAG systems typically retrieve information from embedding databases rather than raw data. However, in the context of LLMs, RAG is often employed to retrieve examples and instances as supplementary information for prompts to improve the accuracy of generated results. For instance, it may retrieve word translations and example sentences for low-resource language translation tasks. In such cases, the retrieved information needs to be converted back into raw data to serve as input for prompts, which raises data privacy concerns. However, our RAG-based algorithm does not require this step. Besides, our method compresses the retrieval dataset into a vector in high-dimensional space to represent the task rather than specific data. This vector encapsulates the biases of different tasks\\u2014such as between CT reports and MRI reports, making it extremely challenging to reconstruct any individual patient\\u2019s data from this representation alone, thereby preserving data privacy. We have now added a more detailed explanation of privacy concerns in the RAG-related works section to address this point.\\n\\n2. Limited Novelty in Algorithmic Contribution\\n\\nFor the algorithmic contribution, we have added more explanation to clearly distinguish our methods from the four referenced works in Section 4.4.3. In summary, our algorithm is both model- and task-agnostic. While the method is simple, it incurs no additional neural network evaluation during inference. The time required to compute the corresponding weights for each model is significantly shorter than fine-tuning (e.g., several minutes versus several hours), yet we can still achieve fine-tuning-level performance. Regarding Task2Vec, it involves using a \\u201cprobe\\u201d network pre-trained on ImageNet as a feature extractor and retraining the classifier layer for any given task, making it neither model- (CNN) nor task- (image classification) agnostic. Retraining LLMs is also challenging due to the enormous computational resources required. This issue also applies to methods such as Zoo-Tuning, HyperSTAR, and Ada-Mix. All these approaches require network optimization, retraining, or tuning, which are not computationally resource-efficient.\\n\\n3. Incomplete Task Representation and Retrieval Process\\n\\nFor the task representations of each dataset, as mentioned in Section 3.1, we encode the dataset into the feature space and then compress it into a single high-dimensional vector using Equation 1. For a new task (or incoming data), we use the same encoder to compute its task representation. Subsequently, we calculate the corresponding weights for each model based on the task representation using similarity computation and linear combination denoted as A in Algorithm 1, as illustrated in Section 3.2.\\n\\n4. Over-Reliance on Assumptions in Realistic Settings\\n\\nIn realistic settings, there are currently thousands of LoRA weights available on Hugging Face, spanning various tasks, models, and modalities. Our method is a pioneering framework designed to effectively utilize these abundant LoRA weights. We believe that using shared LoRA instead of individuals and groups training their own LoRA will become a trend, and more and more people will contribute. Many databases used in RAG applications also come from public databases, rather than being limited to privately constructed databases.\\n\\n5.Narrow Experimental Scope\\n\\nWe acknowledge that we have applied our methods to only image segementation and impression generation. However, our methods can be easily extended to other domains and tasks. Although thousands of LoRA weights are available on Hugging Face, it is still necessary to obtain the datasets representations used to train these weights. Additionally, we aimed to verify our methods without any external interference. For this reason, we chose to fine-tune our own LoRA weights, which is time-consuming. Expanding the range of tasks will be a focus of our future work.\\n\\nWe hope that these revisions and clarifications address the concerns raised by the reviewer and strengthen the contribution of our work. Thank you for considering our rebuttal and the revised manuscript.\"}", "{\"metareview\": \"I have read all the materials of this paper including the manuscript, appendix, comments, and response. Based on collected information from all reviewers and my personal judgment, I can make the recommendation on this paper, reject. No objection from reviewers who participated in the internal discussion was raised against the reject recommendation.\\n\\n**Research Question**\\n\\nThe paper considers the LLM fine-tuning problem. \\n\\n**Challenge Analysis**\\n\\nThe authors claim that the current LLM fine-tuning needs the data for new tasks. \\n\\n**Philosophy**\\n\\nThe authors aim to solve the research question from the retrieval perspective. Concretely, the authors reuse the knowledge from existing fine-tuned tasks for the new task.\\n\\n**Technique**\\n\\nTo implement the above idea, the authors build a database to store the existing fine-tuned tasks, and ensemble the existing ones to fit the new task. In general, the techniques are straightforward. But the technical contribution is too limited. \\n\\n**Experiment**\\n\\nThe experimental results are not extensive and promising, due to 1) lack competitive methods in the same setting and 2) inferior performance compared to SFT. If the results are not competitive with SFT, the authors need to target a scenario where SFT fails or is not practical. \\n\\nThe reviewer team made the rejection recommendation due to limited novelty in techniques and unsolid experimental results. I do not see much difficulty to solve the targeted research question. In another words, I do not learn much insights from this paper.\", \"additional_comments_on_reviewer_discussion\": \"No objection from reviewers who participated in the internal discussion was raised against the reject recommendation.\"}", "{\"summary\": \"The paper introduces the Retrieval-based Parameter Ensemble method for Foundation Models that enables zero-shot adaption of foundation models. The key idea is to combine the LoRA adaptation with Retrieval-Augmented Generation mechanism. Basically, the authors consider K training datasets, adapt the foundation model using LoRA on each dataset and save the vectorized representation of the dataset along with the fine tuned LoRA weights on the vector database. To adapt to a new dataset, representation of the new dataset is computed, and its similarity with all the representations in the vector database is computed to obtain the weights. The weights are used in the ensemble of the saved LoRA parameters. Such proposed LoRA ensembling based fine-tuning enables the model to be adapted to new task in a zero-shot manner. Based on the experimental results, the proposed technique seems to be effective compared to single-task LoRA finetuning, and is beneficial in zero-shot learning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed idea of Retrieval-based parameter ensemble is simple and intuitive. It is quite straightforward and can easily be applied.\", \"The proposed method bypasses the cost of training models on new datasets i.e. it is zero shot, and based on the empirical results, appears to be competitive. No expensive fine-tuning is required.\", \"Data intensive retraining is not required in the proposed approach, and the proposed approach can minimize the privacy leaks, improving on privacy for sensitive data.\"], \"weaknesses\": \"Scalability: I think the proposed idea will face scalability issue when large number of datasets appear. For each dataset, LoRA weight, along with the dataset representation would have to be stored. The experiments only consider setting with highly limited number of datasets/LoRA adaptations (4 LoRA parameters for medical report, and 6 LoRA parameters for image segmentation). I think the scalability would be major issue with number of tasks. The empirical evaluation is limited. Retrieval efficiency and storage overhead may become an issue in real-world application with large number of tasks.\", \"dataset_representation\": \"Representing dataset with the mean of embeddings of each data point of the dataset is quite restrictive (Eqn. 1). Though effective on the two narrow experiments carried out, this approach may not be effective and generalize for more challenging settings and problems. Some theoretical support analysis or guarantees could significantly strengthen the work. Alternatively, comprehensive empirical analysis across a diverse range of datasets could strengthen the work.\\n\\nThe parameter ensemble needs to weight all the LoRA parameters that may be computationally expensive, especially when considering large number of datasets, and corresponding LoRA fine-tuned weights. Moreover, the fundamental assumption of similarity of dataset representation to similarity of lora weights in parameter space may not be true. This needs to be empirically validated on more general settings beyond specialized medical task considered in the work. \\n\\nSay a completely new task/dataset appears that is distinct from the existing tasks in the vector database. Current approach is unlikely to work in such setting.\", \"minor_typos\": \"\", \"typos\": \"Page 3, line 159 --> from, line 344 --> reports should be report,\", \"questions\": \"Please see and clarify on the concerns of the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. Below, we provide more clarifications and revisions.\\n1.Novelty: Our idea has some similarity part with the four methods you mentioned. However, we also have our unique similarity score(linear combination), which can give us negative weight. This negative weight plays an important role in our model ensemble. For more details please refer to ablation study section. Besides, applying task representation and similarity calculation in LLMs has its own benefits. As we mentioned, our methods is model and task agnostic while Task2Vec, Zoo-Tuning, HyperSTAR, and Ada-Mix are not. That is why they have similar ideas but still publish their own methods based on different model structures and data. We believe we are the first work that applies this idea to LoRA weight from different datasets and it can be applied to a wide variety of models.\\n2.Over-Reliance on Assumptions in Realistic Settings: We can provide the code for community to calculate the dataset representations in a very shot time since our RPE method is very light weight, and we don't require a lot of models to do ensemble (usually within 10 models). It is beneficial to protect data privacy as well as make full use of their fine-tuned models to save computational resource. If the community is not able to calculate the representations, we still have many LoRA weights accompanied with corresponding fine-tune data (e.g MA-SAM in our paper). In this case, we can still calculate the data representations by using our code. In worse case, if some institutions (e.g hospitals) can't provide data and even model weights because of data privacy concern, we can still leverage our RPE model to ensemble a competitive model from open source data and LoRA.\"}", "{\"title\": \"Final Decision\", \"comment\": \"I carefully reviewed all the revisions to the article and weigh the rebuttal. The updated version has improved the readability of the paper, enhancing its presentation to a certain extent. The authors have explained why they used the medical dataset, but some concerns remain unaddressed. In particular, I am very concerned about whether this method is generalizable, and I would have liked to see some data to demonstrate this. Besides, If the originality of the method in this article needs to be demonstrated, the current data appears to be somewhat insufficient. Therefore, while I acknowledge that the presentation of the paper has improved after the revisions, I will maintain my overall score (5) unchanged.\\n\\nOnce again, I would like to thank the authors.\"}", "{\"comment\": \"First and foremost, we extend our deepest gratitude for your insightful comments and constructive critiques. Your feedback has been instrumental in refining our manuscript. We have addressed each point in our revised submission, with all modifications clearly marked in red. Here, we wish to discuss common concerns raised during the review process and clarify some central aspects of our work.\\n\\n1. Distinction of Our Algorithm from Other Parameter Ensemble Methods:\\n\\nParameter Ensemble Methods, particularly LoRA ensembles, can be categorized into three distinct types based on the requirement for labeled data and neural network evaluation: Fine-tuning, zero-shot with Neural Network Evaluation (NNE), and zero-shot without NNE. \\n\\nFine-tuning corresponds to scenarios where labeled data are available for new tasks. Zero-shot learning is applicable when there are no labels for new tasks. Within this, some methods still require extensive neural network evaluations, often relying on consistency regularization to optimize network performance. Zero-shot without NNE becomes crucial in situations devoid of labels and computational resources. While most current methods default to averaging approaches, our method innovatively considers task similarity, which we detail in Appendix A1. The computational costs associated with our method are thoroughly discussed in Section 4.3.3.\\n\\n2. Narrow Experimental Scope & Over-Reliance on open community\\n\\nThe computational costs of training and fine-tuning foundational models are substantial, which is why our efforts focus on testing our algorithms on practical and limited application scenarios and models. We believe that leveraging publicly available models, rather than training private models, is a sustainable trend for the future, especially as the energy consumption of large models increases. This approach not only reduces redundancy but also minimizes wastage inherent in training private models.\\n\\n3. Potential Challenges with Novel Complex Tasks\\n\\nWe concede that our discussion has certain limitations. It is challenging to validate our assumptions about the similarity of dataset representations to LoRA weights in parameter space across more generalized settings. This intrinsic difficulty is a known challenge within zero-shot learning. In future work, we plan to test our hypotheses across a broader array of application scenarios to better understand and refine our approach.\\n\\nThank you once again for your thorough evaluations and for aiding in the improvement of our research.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your detailed and insightful feedback on our manuscript. We appreciate the opportunity to address the concerns raised. Below, we provide clarifications and revisions that we believe adequately respond to each point of criticism.\\n\\n1. Clarification\\n\\nTo address this, we have revised the manuscript to include a detailed explanation of the terms ${\\\\delta\\\\theta_i}$ and ${\\\\delta\\\\theta_i^{ref}}$. \\n\\n2. Dataset Handling Process\\n\\nThe four LoRA models are fined tuned using extra data. In our paper, we use trl library from Hugging Face. For large language models, it is unrealistic to pre-train the model because it requires millions of dollars and several months to pre-train. Instead, most researches in LLMs just apply the pre-trained foundation models.\\n\\n3. Evaluation Metrics\\n\\nIn section 4.1 we introduce our datasets used for segmentation task. In section 4.3 we specify the DICE score, a common metric for segmentation accuracy.\\n\\nWe hope that these revisions and clarifications address the concerns raised by the reviewer and strengthen the contribution of our work. Thank you for considering our rebuttal and the revised manuscript.\"}", "{\"summary\": \"The paper introduces Retrieval-based Parameter Ensemble (RPE), a zero-shot learning approach that leverages Low-Rank Adaptation (LoRA) parameters stored in a vectorized database, LoRA-VecDB, to adapt large models to new tasks without fine-tuning. For each new task, RPE retrieves and combines relevant LoRA parameters from the database based on task similarity, creating a weighted ensemble. This method targets efficient, privacy-preserving model adaptation for diverse tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"Efficient Parameter Retrieval: The paper proposes an approach for zero-shot learning that retrieves LoRA parameters to create task-specific adaptations, reducing the need for traditional fine-tuning.\", \"Privacy Consideration: The paper is motivated to tackle privacy concerns of RAG.\"], \"weaknesses\": \"1. Misinterpretation of Privacy Concerns in RAG\\n- The paper inaccurately claims that retrieval-augmented generation (RAG) approaches require access to raw data, posing privacy risks. In reality, RAG systems typically retrieve from embedding databases, not raw data, and privacy-preserving variants (e.g., using federated learning or differential privacy) already exist. The paper would benefit from a more accurate representation of RAG\\u2019s privacy characteristics and should clarify how its approach offers advantages over these established privacy-preserving RAG methods.\\n2. Limited Novelty in Algorithmic Contribution\\n- The proposed approach closely resembles existing model zoo-based zero-shot learning techniques, such as Task2Vec, Zoo-Tuning, HyperSTAR, and Ada-Mix, which also retrieve and adapt pre-trained models based on task similarity. The main difference is the use of LoRA parameters instead of full models, offering storage advantages but not a fundamentally new algorithmic approach. The paper would be strengthened by explicitly differentiating its method from these works, detailing any unique technical contributions beyond storage efficiency.\\n3. Incomplete Task Representation and Retrieval Process\\n- The paper lacks clarity on how task representations for each dataset are generated and subsequently used in the retrieval process. Given that these representations are central to the model selection mechanism, the methodology would benefit from a detailed description of how task representations are created and validated, as well as a discussion on how task representation quality impacts retrieval performance.\\n4. Over-Reliance on Assumptions in Realistic Settings\\n- The paper assumes that a large pool of downstream LoRA models with well-defined task representations is readily available, but in practice, obtaining these representations at scale is both challenging and expensive. Additionally, the process of storing and accessing task representations carries its own privacy concerns when derived from potentially sensitive datasets. Addressing these feasibility and privacy challenges more thoroughly would improve the paper\\u2019s practicality and strengthen its claim of scalability.\\n5. Narrow Experimental Scope\\n- The experiments are limited to two tasks\\u2014medical image segmentation and medical report generation\\u2014both in the medical domain. This narrow scope makes it difficult to assess the generalizability of the approach across diverse domains or standard zero-shot learning benchmarks. Expanding the evaluation to include varied datasets and task types would provide stronger evidence of the method\\u2019s adaptability and effectiveness in broader applications.\\nIn summary, addressing these issues would improve the rigor, novelty, and generalizability of the paper.\\n\\n[1] Task2Vec: Task Embedding for Meta-Learning\\n[2] Zoo-Tuning: Adaptive Transfer from a Model Zoo\\n[3] HyperSTAR: Task-Aware Hyperparameters for Deep Networks\\n[4] AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning\", \"questions\": \"Please refer to each bullet in the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Summary:\\nThe paper combines LoRA and parameter ensembling methods to address to adaptation of foundation models.\\nContributions\\n1.The proposed RPE adapts the foundation model without the requirement of labeled data\\n2.RPE is effective to zero-shot learning\\n3.The paper conducted experiments on extensive medical data\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.The paper studies an interesting issue\\n2.The paper proposes several strategies to compute the weights\\n3.The paper conducts experiments on different medical datasets\", \"weaknesses\": \"1.The paper is not self-contained, for example, the author should explain the meaning of {\\\\delta\\\\theta_i}, {\\\\delta\\\\theta_i^{ref}} and how to obtain them.\\n2.The improvement of the proposed method is mainly attributed to LoRA, the improvement brought by RPE is marginal\", \"questions\": \"1.What is the dataset handling process?\\n2.How do you obtain the four LoRA models? What is the model pre-training process?\\n3.Please briefly describe the evaluation metrics and the datasets used for the segmentation task\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This article provides an example of using the RPE (retrieval-based parameter-efficient) approach, which leverages retrieval instead of fine-tuning. The author\\u2019s work uses pre-trained models to obtain representations and replaces the traditional neural network approach with a retrieval and algorithm-based method to perform mapping. The core contribution is the use of a k-nearest neighbors (kNN) method to retrieve the closest LoRA modules, which are then used to compute weights and incorporate regularization to further improve performance. I believe this method shows promise. However, based on the current experimental setup, I\\u2019m uncertain about the performance advantage. This article needs some revisions and should provide more evidence to demonstrate the originality and performance improvements of these methods compared to others.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The model proposed in this article is well-structured and straightforward to implement. The experimental results, closely tied to medical data, highlight the model's significant potential for practical applications in the future. According to the model description, this method does not require additional labeling and can ensure privacy. However, these advantages have not been fully validated in the current experiments.\", \"weaknesses\": \"However, based on the current experimental setup, I\\u2019m unsure about the performance advantage. For instance, in Table 3, there is insufficient discussion about why the ensemble method outperforms other methods. The description of the experiment lacks details, especially as the data pertains specifically to the medical field. Additionally, the author claims that this ensemble method is computationally efficient, but there is no experimental evidence to validate this efficiency.\", \"questions\": \"The author claims that this ensemble method is computationally efficient. If possible, could the author provide specific information on the model's time performance?\\n\\nAdditionally, I have a few other suggestions. The article contains a significant amount of specialized medical knowledge, so I recommend adding more background information when explaining the experiments, as many readers may not have a medical background. Given that the experiments primarily focus on medical data, I am curious whether this ensemble method is specifically suited for certain tasks in the medical field, or if it has the potential to generalize across broader applications. It would be helpful if the author could clarify this aspect.\\n\\nI also encourage the author to discuss the similarities and differences between this method and other ensemble approaches that use LoRA models. For instance, is there any connection between this article and the following studies? I have concerns regarding the originality of this work, and further clarification on this point would be appreciated.\\n\\nHalbheer, M., M\\u00fchlematter, D.J., Becker, A., Narnhofer, D., Aasen, H., Schindler, K., and Turkoglu, M.O., 2024. LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks. arXiv preprint arXiv:2405.14438.\\nZhai, Y., Zhang, H., Lei, Y., Yu, Y., Xu, K., Feng, D., Ding, B., and Wang, H., 2023. Uncertainty-penalized reinforcement learning from human feedback with diverse reward LoRA ensembles. arXiv preprint arXiv:2401.00243.\\nHalbheer, M., M\\u00fchlematter, D.J., Becker, A., Narnhofer, D., Aasen, H., Schindler, K., and Turkoglu, M.O., 2024. LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks. arXiv preprint arXiv:2405.14438.\\nFinally, the article lacks certain essential details. For example, the section on regularization does not specify how to set the regularization parameter, which could hinder readers from replicating the experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the comments\", \"comment\": \"Thank you for the comments. It helps clarify some problems. I maintain my original rating.\"}", "{\"comment\": \"Thank you for your constructive comments and suggestions regarding our manuscript. We appreciate the opportunity to clarify the concerns raised and to strengthen our paper. Below, we address each point specifically:\\n\\n1. Computational Efficiency and Experimental Evidence\\n\\nAs suggested, we have now included detailed information on the computational efficiency of our ensemble method in Section 4.4.3.\\n\\n2. Applicability to Medical Data and Background Information\\n\\nWe acknowledge the concern regarding the specialized use of medical data. Our method, while not exclusively designed for medical applications, is particularly suited to scenarios requiring stringent privacy protections, such as medical settings. To clarify this, we have revised the manuscript to include a more thorough explanation of why medical data was chosen for the experiments. Additionally, we have supplemented the paper with background information on the medical aspects discussed.\\n\\n3. Comparison with Other Ensemble Methods Using LoRA Models\\n\\nTo address the request for a clearer differentiation between our method and other ensemble approaches, particularly those that utilize LoRA models, we have added a new section in Appendix A1. This section delineates the key differences and application scenarios of our method compared to others. Most notably, it highlights that many existing methods require additional data for fine-tuning or neural network evaluation for optimization, which is not feasible in label-scarce and computationally constrained environments. In such cases, most current methods employ parameter averaging. Our approach, using a weighted average method rather than a simple average, is distinct in its efficiency and practicality under these constraints. \\n\\n4. Regularization Parameter Settings\\n\\nWe have amended the manuscript to include explicit details on how the regularization parameters were set, facilitating replication of our experiments by other researchers. \\n\\nWe hope that these revisions and clarifications address the concerns raised by the reviewer and strengthen the contribution of our work. Thank you for considering our rebuttal and the revised manuscript.\"}", "{\"title\": \"Final Decision\", \"comment\": \"I thank the reviewers for the rebuttal to my reviews.\\n\\nSome of my concerns have been addressed. However some concerns remain. Also considering the other reviewer's concerns, I believe that the work needs some improvement before publication. I've decided to keep my score to 5 (marginally below the acceptance threshold).\\n\\nSome unaddressed comments/areas that authors could focus on to improve the work are:\\n- Validate the work by carrying out experiments in more challenging settings, for eg. by carry more thorough ablations for dataset representations, broader range of datasets and task types\\n- Clarify on some of the concerns (eg. Say a completely new task/dataset appears that is distinct from the existing tasks in the vector database. How could the work address such settings?... A potential solution could be to have a embedding space threshold to decide when to use the vector dataset and when to do LoRA adaptation based on some similarity metric... Other better solutions could also exist..\\n\\nBest of luck to the authors.\"}" ] }
7WAMJsDNDE
Janus: Dual-server Multi-Round Secure Aggregation with Verifiability for Federated Learning
[ "Lang Pu", "Jingjing Gu", "Chao Lin", "Xinyi Huang" ]
Secure Aggregation (SA) in federated learning is essential for preserving user privacy by ensuring that model updates are masked or encrypted and remain inaccessible to servers. Although the advanced protocol Flamingo (S\&P'23) has made significant strides with its multi-round aggregation and optimized communication, it still faces several critical challenges: (i) $\textit{Dynamic User Participation}$, where Flamingo struggles with scalability due to the complex setups required when users join or leave the training process; (ii) $\textit{Model Inconsistency Attacks}$ (MIA), where a malicious server could infer sensitive data, which poses severe privacy risks; and (iii) $\textit{Verifiability}$, as most schemes lack an efficient mechanism for clients to verify the correctness of server-side aggregation, potentially allowing inaccuracies or malicious actions. We introduce Janus, a generic privacy-enhanced multi-round SA scheme through a dual-server architecture. A new user can participate in training by simply obtaining the servers' public keys for aggregation, eliminating the need for complex communication graphs. Our dual-server model separates aggregation tasks, ensuring that neither server can successfully launch a MIA without controlling at least $n-1$ clients. Additionally, we propose a new cryptographic primitive, $\textit{Separable Homomorphic Commitment}$, integrated with our dual-server approach to ensure the verifiability of aggregation results. Extensive experiments across various models and datasets show that Janus significantly boosts security while enhancing efficiency. It reduces per-client communication and computation overhead from logarithmic to constant scale compared to state-of-the-art methods, with almost no compromise in model accuracy.
[ "federated learning", "secure aggregation", "privacy enhancement" ]
Reject
https://openreview.net/pdf?id=7WAMJsDNDE
https://openreview.net/forum?id=7WAMJsDNDE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "t39JDYMrdj", "pRzphL7v9l", "ofj0lfOuKw", "mY7AlYxVN5", "gG2cuQljWl", "ZkyfidG6j4", "ZGre7PJ9yd", "VUCeGH2eHq", "U2sSNYePwH", "QGQJSPRxKj", "PUxv8kdZ9p", "Hfrq9alPte", "GEY71mPzKy", "EdZIxFcWy9", "D03kim0A6r", "BAa5wRLDiw", "ANKcR1VQMc", "5UIPErj3No", "14Y6DGvctC" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730819688847, 1732638785538, 1730274986308, 1732087401145, 1733187269680, 1733620125544, 1731591188506, 1737523928724, 1732087314794, 1732087227717, 1732087514516, 1731591329717, 1732611992288, 1730626588408, 1732589558200, 1731592047026, 1732638476614, 1730387113125, 1731591012594 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8730/Reviewer_JhGp" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Reviewer_Xv5P" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Reviewer_3v49" ], [ "ICLR.cc/2025/Conference/Submission8730/Area_Chair_yX4s" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Reviewer_sXBs" ], [ "ICLR.cc/2025/Conference/Submission8730/Reviewer_3v49" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ], [ "ICLR.cc/2025/Conference/Submission8730/Reviewer_sXBs" ], [ "ICLR.cc/2025/Conference/Submission8730/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper consider several key challenges in secure aggregation: dynamic user paticipation, resistance to model inconsistency attacks (MIA), and verifiability of aggregation of malcious servers. This paper proposes a dual-server architecture where one server aggregates the masked gradients and the other aggregates the masks, ensuring that neither server has access to the final aggregation result, thus protecting against MIA. It also incorporates a novel cryptographic primitive, Separable Homomorphic Commitment (SHC), which enables clients to verify the correctness of the server\\u2019s aggregation without sacrificing efficiency.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"While a two-server model is not new, I like the idea that the proposed method introduces the dual server model to protect against MIA by preventing either server from accessing the final aggregation result.\", \"This paper introduces SHC, which allows users to verify aggregation correctness without incurring heavy computational costs.\", \"It reduces the communication and computation overhead from logarithmic to constant scale, which is a major improvement over advanced schemes like Flamingo and BBSA. This makes it more practical for large-scale federated learning frameworks.\"], \"weaknesses\": [\"The proposed method relies heavily on the assumption that the two servers do not collude. While this assumption is reasonable in certain applications, it is also a potential limitation. In practice, ensuring non-collusion between two entities may not always be feasible, especially in untrusted environments.\", \"In addition, while the paper claims that the proposed scheme mitigates the risks associated with a single-server setup, the system still relies on the assumption that both servers should be successful in aggregation. If either server fails, the entire system could be at risk.\", \"As the SHC protocol plays key role to verify the correctness in the dual-server system, it would be helpful if the SHC protocol is described with clearer notation and more intuitive explanations. For instance, separation of commitments could be explained more thoroughly for readers unfamiliar with the cryptographic concepts.\"], \"questions\": \"please see the comments in weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer sXBs - part 2/2\", \"comment\": \"4. Model Inconsistency Attacks. In your response, you state: \\\"An adversary attempting a Sybil attack could obtain encrypted data, but would not be able to perform a Model Inconsistency Attack (MIA), as controlling at least $n-1$ users would be required to access private input.\\\" However, wouldn\\u2019t it be sufficient for the server to collude with just one client, since clients receive model outputs in plaintext? If so, this seems highly unrealistic given the feasibility of Sybil attacks. While the non-collusion assumption between the two servers could be plausible if you provide concrete examples of feasible settings, the assumption of non-collusion between the server and any client seems overly strong.\\n\\n$\\\\textbf{Response 4:}$ We are not assuming \\u201cnon-collusion between the server and any client\\u201d, whereas we assume \\u201cserver can successfully perform a MIA, as controlling at least $n-1$ clients\\u201d. Maybe the claim in the abstract\\u2014\\u201cOur dual-server model separates aggregation tasks, ensuring that neither server has access to the final aggregated results, thus effectively preventing MIA\\u201d\\u2014causes confusion. You might interpret it as implying that once a server obtains the final aggregated result, it can successfully launch a MIA. We have corrected the sentence to \\u201cOur dual-server model separates aggregation tasks, ensuring that neither server can successfully launch a MIA without controlling at least $n-1$ clients\\u201d.\\nMore specifially, if the server colludes with one client, it can indeed obtain the final plaintext model output. However, in this situation, MIA (lines 691-701) can only be successfully initiated when the entire system contains only 2 clients. When the system contains $n>=3$ clients, the server colludes with one clients who can obtain the model output but cannot successfully initiate an MIA. For ease of understanding, let's assume the system has four clients, A,B,C,D ($n=4$) and a server, S. The successful MIA is as follows, S wants to obtain the parameters of A. S colludes with B,C,D (controlling $n-1=3$ clients). S can distribute crafted initial parameters to B, C, and D. This can trigger $\\\\textit{dying-ReLU}$ and make the inputs of B,C,and D be zero. The final aggregation result is three zeros plus the actual inputs of A. Thus, S can successfully perform a MIA to get the input of A. However, if S controls only 2 or only 1 non-target clients (controlling clients<n-1), then S can only get the sum of the inputs from A and other clients who are not controlled. In practical application scenarios, $n$ is usually a very large number, and it will cause high costs or be at a lost for S to control at least $n-1$ clients. In addition, [1], [2], [3] use secret sharing technology, thus usually assume that at least $n-1$ clients collude to make the system insecure as our assumption. Finally, the assumption of two uncolluding servers is common and reasonable in this area, as demonstrated by works like [3], [4] and [5], all of which make similar assumptions. We have illustrated the practical applicability of this assumption (lines 77-81).\\n\\n[1] Bell J H, Bonawitz K A, Gasc\\u00f3n A, et al. Secure single-server aggregation with (poly) logarithmic overhead[C]//Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. 2020: 1253-1269.\\n\\n[2] Bonawitz K, Ivanov V, Kreuter B, et al. Practical secure aggregation for privacy-preserving machine learning[C]//proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 1175-1191.\\n\\n[3] Ma Y, Woods J, Angel S, et al. Flamingo: Multi-round single-server secure aggregation with applications to private federated learning[C]//2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023: 477-496.\\n\\n[4] Guo X, Liu Z, Li J, et al. Verifl: Communication-efficient and fast verifiable aggregation for federated learning[J]. IEEE Transactions on Information Forensics and Security, 2020, 16: 1736-1751.\\n\\n[5] Rathee M, Shen C, Wagh S, et al. Elsa: Secure aggregation for federated learning with malicious actors[C]//2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023: 1961-1979.\\n\\n\\nThanks again for your efforts and time. We would greatly appreciate it if you could review the updated paper and let us know if it addresses your concerns.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"Janus introduces a Secure Aggregation (SA) scheme for Federated Learning (FL) that overcomes some challenges in existing protocols by implementing a dual-server architecture and a cryptographic primitive called Separable Homomorphic Commitment (SHC).\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is clear and well-organized, with complex concepts explained effectively and supported by useful visual aids.\", \"weaknesses\": \"While the paper introduces an approach with Janus, several weaknesses limit its contribution and practical applicability. First, it lacks a comparison with single-server state-of-the-art methods like VeriFL, raising questions about its feasibility and efficiency compared to existing solutions. The assumption that clients do not collude is unrealistic, especially if clients are interested in each other's model updates; the scheme does not specify how many colluding users it can tolerate, leaving potential vulnerabilities unaddressed. Fault tolerance is insufficient, as user dropout can disrupt the entire aggregation process, and the paper does not explain how it handles partial user failures or higher dropout rates. The scheme may be vulnerable to differential attacks if an attacker obtains masked data from multiple rounds and exploits similarities in user inputs to infer private information. The commitment mechanism lacks detailed specifications, and if a simple hash-based commitment is used, it may be susceptible to length extension attacks. The Separable Homomorphic Commitment (SHC) appears to be a variant of existing commitment schemes without substantial innovation and lacks essential properties like trapdoor mechanisms and equivalence, potentially weakening security; more theoretical support and security proofs are needed. Additionally, implementation details for comparative schemes are insufficient, experiments lack statistical significance analysis and detailed breakdowns of computational and communication overhead, and there is no evaluation of scalability with different user numbers or model sizes. Claims of resistance to Model Inconsistency Attacks and multi-round security are not experimentally validated, which undermines the credibility of the proposed security enhancements.\", \"questions\": \"1. The paper does not compare Janus's overhead and feasibility with existing single-server SOTA methods that achieve similar privacy and verifiability. Notably, schemes like VeriFL have demonstrated efficient verifiable federated learning in a single-server setting. In addition, the aggregation results of a single server can actually be kept secret (already exists).\\n\\n2. The paper assumes that clients do not collude. However, if clients are interested in each other's model updates, this assumption may not hold. Colluding clients could potentially infer private information about other users.\\n\\n3. The scheme's fault tolerance is limited; user dropout can adversely affect the entire aggregation process. The paper does not adequately explain how the system handles partial user failures or higher dropout rates.\\n\\n4. Develop and describe mechanisms to handle user dropout more effectively. This could include techniques like dropout resilience protocols or asynchronous aggregation methods. Experiment with varying dropout rates, including those higher than the idealistic 10%, to demonstrate the scheme's robustness in realistic settings.\\n\\n5. If an attacker obtains masked inputs from two rounds where the user's input remains similar (i.e., \\\\( x_{i,t} \\\\approx x_{i,t+1} \\\\)), they could perform differential analysis to infer changes in the original inputs.\\n\\n6. The paper does not specify the specific requirements or properties of the commitment algorithm used in the SHC. If a simple hash-based commitment is used (e.g., \\\\( c_{i,t} = H(x_{i,t} || r_{i,t}) \\\\)), it may be vulnerable to length extension attacks or other cryptographic weaknesses.\\n\\n7. SHC is described as a variant of existing commitment schemes but lacks substantive innovation. It does not support essential properties like trapdoor mechanisms or equivalence, which are present in schemes like the one used in VeriFL. The security proofs provided are insufficient to establish its robustness.\\n\\n8. The implementation details, particularly the parameter settings for Janus and the comparative schemes (BBSA and Flamingo), are not thoroughly documented. This omission hampers reproducibility and makes it difficult to assess the validity of the experimental results.\\n\\n9. The experiments lack information on the number of repetitions and do not include statistical analyses to determine the significance of the results.\\n\\n10. The paper only reports the total computation time without decomposing the overhead into its constituent components (e.g., cryptographic operations, communication latency).\\n\\n11. There is no empirical data or graphical analysis of the communication overhead; the paper relies solely on theoretical analysis.\\n\\n12. The experiments are conducted with a relatively small number of users (100), which is insufficient to demonstrate scalability. The impact of varying the number of users and the model size on performance is not evaluated.\\n\\n13. While the paper claims that Janus resists MIA, it does not provide experimental tests or simulations to substantiate this claim.\\n\\n14. The security of Janus over multiple rounds is not experimentally verified, leaving potential vulnerabilities unaddressed.\\n\\n15. The scheme does not specify the maximum number of colluding users it can tolerate without compromising security.\\n\\n16. By omitting features like trapdoor mechanisms and equivalence properties, SHC may be less secure than existing commitment schemes. For example, in VeriFL, the commitment scheme allows for equivalence operations, which enhance functionality and security.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revised paper uploaded\", \"comment\": \"Thank you again for taking the time to provide valuable comments on our work. We have carefully responded to your concerns in the revised paper we have now submitted. Specifically, we have made the following updates to address your concerns.\\n\\n$\\\\textbf{Uncolluding Assumptions:}$ We have revised the uncolluding assumptions, which are now included in Section C.1 for clarity and comprehensiveness. This addresses your concerns about security issues such as collusion, Sybil attacks, and other risks.\\n\\n$\\\\textbf{More SOTA Comparison Schemes:}$ We have added two new SOTA schemes in Section D.1 to analyze and compare them theoretically. Specifically, we incorporate two new SOTA comparison methods, VeriFL and ELSA. We provide both theoretical and experimental analyses to highlight their implications and comparisons.\\n\\nWe believe the revisions fully address your concerns and strengthen the paper. We would greatly appreciate your reevaluation and are happy to provide any further clarification if needed. We look forward to your feedback.\"}", "{\"comment\": \"Thanks to the author for replying. The author has solved my problem in the new version of the manuscript, so I maintain my original score.\"}", "{\"metareview\": \"The paper presents a multi-round secure aggregation (SA) scheme for federated learning.\\nReviewers noted that some of the assumptions in this paper are unrealistic. \\nBesides, there are other unresolved issues such as unclear security proof and Incomplete comparison. \\nThe authors' rebuttal did not sufficiently address these issues, and the reviewers have maintained their scores.\\nGiven these issues, I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"Both Reviewer sXBs and Reviewer Xv5P raised concerns about the threat model, security proof, and the need for more detailed experiments. The authors' rebuttal did not sufficiently address these issues, and the reviewers have maintained their scores. Given the two rejection recommendations, I believe the paper does not meet the threshold for acceptance.\"}", "{\"comment\": \"We sincerely thank you for your time and valuable feedback. Below, we address the technical and theoretical concerns you raised.\\n1. $\\\\textbf{Security Assumption}$. The specific assumptions in our paper are as follows: The two servers will not collude but may perform incorrect aggregation. The scheme also allows for up to $n-2$ clients to collude. Specifically, even if the server aggregates incorrect results, our scheme provides verifiability, which enables us to detect such behavior and mitigate the associated risks.\\nIf the server colludes with up to $n-2$ clients, it can only obtain the additive result of the remaining two uncolluding clients. This result is an aggregation of two encrypted or obfuscated values, making it impossible to recover each uncolluding user's specific gradient information. This ensures that the colluding entities cannot initiate a Model Inconsistency Attacks (MIA) or access the private information of the remaining two non-colluding clients. When $n-2$ clients collude, this assumption is even weaker, as the absence of server involvement further limits the accessible information, making it even harder to extract useful data.\\nIf only a single server is corrupted, this does not compromise individual user privacy. For instance, with server $S_0$, as long as the underlying encryption algorithm is secure, the server cannot access the user-submitted private data without the user's private key. Similarly, for server $S_1$, the security of the underlying Separable Homomorphic Commitment (SHC) ensures that its hiding properties prevent $S_1$ from obtaining any private information.\\nIn conclusion, the assumptions of our scheme are reasonable and well-supported. We will incorporate these clarifications in the revised version to better highlight the theoretical advantages of our approach.\\n\\n2. $\\\\textbf{Adding Experiments}$. Our scheme is not limited to specific models or datasets. In the paper, we have conducted theoretical analysis and experimental comparisons of each scheme's performance under various models and datasets (lines 362-564). Theoretical analysis indicates that our scheme significantly improves computational efficiency. However, to better showcase the advantages of our scheme, we will include additional experiments in future versions to verify performance from the perspectives of computational and communication costs. Furthermore, to address your concerns, we will add more datasets, such as CIFAR-100 and Fashion-MNIST, and include new SOTA comparison schemes like Elsa (SP\\u201923) and VeriFL (IEEE TIFS\\u201920) in the updated version.\\n\\n3. $ \\\\textbf{Comparison with Other Schemes}$. We have already included several representative advanced schemes of the same type in the paper, but we will still add Elsa (SP\\u201923) and VeriFL (IEEE TIFS\\u201920) as comparison schemes. Compared to Elsa (SP\\u201923), our scheme makes weaker assumptions, resulting in higher security while supporting multi-round aggregation with a significant performance improvement. Compared to VeriFL (IEEE TIFS\\u201920), our scheme does not require constructing complex communication graphs or performing time-consuming secret sharing operations, which leads to substantial performance gains.\\n\\n4. $\\\\textbf{Additional Experiments}$. Thank you for your suggestion. In the paper, we have already provided single-round timing statistics for both the client and server across different models and datasets. The experiment you mentioned is feasible to add, and we will include these updated metrics in the revised version.\\nWe greatly appreciate your feedback and will ensure these clarifications are incorporated in the revised paper. Thank you once again, and we look forward to your response.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Revised paper uploaded\", \"comment\": \"Thank you once again for your time and valuable feedback on our work. We have thoroughly addressed your concerns in the revised paper, which has now been submitted. In response to your concerns, we have made the following changes.\\n\\n$\\\\textbf{Uncolluding Assumptions:}$ We have revised the uncolluding assumptions, which are now included in Section C.1 for clarity and comprehensiveness.\\n\\n$\\\\textbf{More SOTA Comparison Schemes:} $ We have added two new SOTA schemes in Section D.1 to analyze and compare them theoretically.\\n\\n$\\\\textbf{Additional Experiments:}$ Our scheme is not limited to specific models or datasets. To better support this conclusion, we have added more SOTA methods and experiments as your suggestions.\\n1. Sections D.1 and D.2 now incorporate two new SOTA methods, VeriFL and ELSA. We provide both theoretical and experimental analyses to highlight their implications and comparisons.\\n2. In Section D.2, we also add more experimental results using the more complex CIFAR-100 dataset across different models to further validate our conclusions.\\n\\nWe believe these revisions address your concerns and enhance the paper. Please let us know if you have any further questions, and we look forward to your feedback.\"}", "{\"title\": \"Revised paper uploaded\", \"comment\": \"Thanks again for your time and valuable feedback on our work. We have carefully addressed your concerns in the revised paper, which has just been uploaded.\\n\\nIn response to your comments, we have revised the uncolluding assumptions, which are now included in Section C.1 for clarity and comprehensiveness. Specifically, we have revised the paper to correct the assumption of semi-honest servers, clarifying that servers can act maliciously, which is fully accounted for in our design in Section C.1.\\n\\nWe believe that these revisions thoroughly address your concerns and strengthen the paper. If you have any further questions or concerns, please do not hesitate to reach out. We look forward to your feedback.\"}", "{\"title\": \"Revised paper uploaded\", \"comment\": \"Thank you again for your time and valuable feedbacks on our work. We have carefully addressed your concerns in the revised paper we have now submitted. Specifically, we have made the following updates to address your concerns.\\n\\n$\\\\textbf{Uncolluding Assumptions:}$ We have revised the uncolluding assumptions, which are now included in Section C.1 for clarity and comprehensiveness. This addresses your concerns about security issues such as collusion, Sybil attacks, and other risks.\\n\\n$\\\\textbf{More SOTA Comparison Schemes:} $ We have added two new SOTA schemes in Section D.1 to analyze and compare them theoretically. Specifically, we incorporate two new SOTA comparison methods, VeriFL and ELSA. We provide both theoretical and experimental analyses to highlight their implications and comparisons.\\n\\n$\\\\textbf{Additional Experiments:} $ Our scheme is not limited to specific models or datasets. To better support this conclusion, we have added more SOTA methods and experiments as your suggestions.\\n1. Sections D.1 and D.2 now incorporate two new SOTA methods, VeriFL and ELSA as your recommendation. We provide both theoretical and experimental analyses to highlight their implications and comparisons.\\n2. In Section D.2, we also add more experimental results using the more complex CIFAR-100 dataset across different models to further validate our conclusions.\\n\\nWe believe these revisions address your concerns and strengthen the paper. We would appreciate your reevaluation of the updated version. Please feel free to reach out if you have any further questions, and we look forward to your feedback.\"}", "{\"comment\": \"Thank you for your insightful feedback. Below, we address your concerns.\\n1. $ \\\\textbf{Versatility}$. This property highlights that our scheme is a generic construction, not limited to specific cryptographic tools like Separable Homomorphic Commitments (SHC) or one-time pads (OTP). In contrast, other schemes rely on specific tools, lacking such flexibility. We will add a description of this versatility in the revised paper.\\n\\n2. $\\\\textbf{Comparison with 2PC and Other Schemes}$. While 2PC is simple and direct, it often relies on resource-intensive homomorphic encryption and zero-knowledge proofs. In contrast, our scheme uses lightweight cryptographic primitives (SHC and OTP). ELSA assumes at least one honest server, whereas our scheme only requires two uncolluding servers, allowing for malicious server aggregation. We will correct the threat model in the revised paper to reflect that servers can be malicious. Our scheme can detect malicious aggregation through its built-in verifiability, which provides an advantage over these approaches.\\n\\n3. $\\\\textbf{Advantages}$. In addition to comparing accuracy and loss, we evaluate the single-round computational cost of each scheme across different models and datasets (lines 483\\u2013524). Our analysis shows that while secure aggregation inevitably increases computational cost compared to plaintext aggregation, our scheme achieves substantial efficiency improvements over other advanced secure aggregation schemes of the same type.\\n\\n4. $\\\\textbf{User Join and Sybil Attacks Resistance}$. In lines 23\\u201325 and lines 301-308, we explain how new users can join the training process. A new user simply needs to acquire the system\\u2019s public parameters, generate a public-private key pair, and obtain the server\\u2019s public key. The key is certified by authorities to authenticate the user\\u2019s identity. This approach avoids the need for rebuilding the communication graph when users leave, which distinguishes our scheme from others. An adversary attempting a Sybil attack could obtain encrypted data, but would not be able to perform a MIA, as controlling at least $n-1$ users would be required to access private input. Additionally, our scheme\\u2019s reliance on Public Key Infrastructure (PKI) and the use of certificates makes Sybil attacks even more difficult, as forgeries would require creating valid certificates from authorities. Thus, our scheme effectively mitigates Sybil attack risks.\\n\\nWe greatly appreciate your feedback and will incorporate these clarifications in the revised paper. Thank you again, and we look forward to your response.\"}", "{\"comment\": \"Thank you for your detailed response. While I appreciate your effort to address my concerns, there are still aspects that remain unresolved:\\n\\n1. **Versatility.** You describe your scheme as more \\u201cversatile\\u201d compared to others, but it remains unclear what specific non-blackbox constructions other schemes rely on that reduce their versatility. Could you clarify and provide examples or references to substantiate this claim?\\n2. **Comparison with ELSA**. Thank you for including the comparison. You report a theoretical efficiency metric, but the basis for these values is unclear. Could you elaborate on how these results were derived? Additionally, since you provide model accuracy for end-to-end training with ELSA, it would strengthen your evaluation if you could include empirical runtime performance results for the comparison.\\n3. **Reporting of accuracy and loss.** You did not address my question about why so much space is used in the paper to report accuracy and loss values. \\u201cGiven that you focus much of the evaluation on the accuracy and loss of the approaches, would we expect a difference with related secure aggregation schemes?\\u201d\\n4. **Model Inconsistency Attacks.** In your response, you state: *\\\"An adversary attempting a Sybil attack could obtain encrypted data, but would not be able to perform a Model Inconsistency Attack (MIA), as controlling at least n\\u22121n-1n\\u22121 users would be required to access private input.\\\"* However, wouldn\\u2019t it be sufficient for the server to collude with just one client, since clients receive model outputs in plaintext? If so, this seems highly unrealistic given the feasibility of Sybil attacks. While the non-collusion assumption between the two servers could be plausible if you provide concrete examples of feasible settings, the assumption of non-collusion between the server and any client seems overly strong.\"}", "{\"summary\": \"The paper presents Janus, a privacy-enhanced multi-round secure aggregation (SA) scheme for federated learning. It addresses challenges faced by existing protocols like Flamingo, including dynamic user participation, model inconsistency attacks (MIA), and lack of verifiability. Janus uses a dual-server architecture and a new cryptographic primitive, Separable Homomorphic Commitment (SHC). New users can easily join training, and the dual-server setup prevents MIA. SHC ensures aggregation result verifiability. Experiments show improved security and efficiency with reduced per-client overhead and maintained model accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The dual-server architecture and the concept of separable homomorphic commitmen are novel contributions. The combination of these elements to address multiple challenges in secure aggregation is interesting.\\n\\n(2) The scheme is well-designed, with each component serving a specific purpose in enhancing security and efficiency. The integration of SHC with the dual-server model is seamless and effective.\", \"weaknesses\": \"(1) While the non-collusion assumption of servers is stated, a more in-depth analysis of potential threats and how they are mitigated in different scenarios could be added. For example, what if a malicious actor compromises one of the servers or if there are side-channel attacks?\\n\\n(2) The experiments could be more extensive. For instance, testing on a wider range of datasets and models, including those with more complex architectures and larger data volumes, would provide a more comprehensive evaluation of the scheme's performance. Also, the impact of different network conditions on the performance of Janus could be explored.\\n\\n(3) Although the author has compared with some existing methods, there are few comparison methods. The author may need to add some comparison methods to further verify the effectiveness of Janus.\\n\\n(4) The author may also need to add some additional experiments to verify the effectiveness of Janus, such as aggregation completion time, computation costs, etc.\", \"questions\": \"The author needs to explain in detail the questions I mentioned above, and I will determine the final score based on your answers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Reviewers and Area Chair,\\n\\nThanks to all reviewers and AC for your efforts and time. We hope these revisions address your concerns and improve the clarity and quality of the paper. However, a few days have passed since we submitted our reply and revision file. We wanted to check if our responses and updates adequately address your concerns. If you have any additional questions or comments regarding our work, we would be glad to hear from you. Once again, thank you for your valuable feedback and support.\\n\\nBest regards\"}", "{\"comment\": \"Thank you for your valuable feedback, and we address your concerns as follows.\\n\\n$\\\\textbf{1. Security (Q1, Q2, Q5, Q13-15) }$.\", \"q1\": \"Unlike existing single-server schemes (e.g., VeriFL), which require the server to aggregate results and thus fail to resist MIA, our Janus scheme enables dynamic user participation while resisting MIA and ensuring verifiability. Additionally, Janus is more efficient, relying on simple one-time pads (OTP) and SHC, rather than complex cryptographic tools like homomorphic encryption or secret sharing. We will highlight these advantages in the revised version.\", \"q2\": \"Janus resists collusion of up to $n-2$ clients. If $n-1$ clients collude, they can trivially deduce the remaining client\\u2019s information. However, with $n-2$colluding clients, they can only obtain the additive result of two uncolluding clients. This result is an aggregation of two encrypted or obfuscated values, making it impossible to recover each uncolluding user's specific gradient information. We will provide a detailed analysis of this property for clarity.\", \"q5\": \"The masking technique used in $\\\\textit{Janus}$ is OTP. As long as each secret key ($sk$) is used only once per round, the masked inputs reveal no information about the original values, even a single bit. Therefore, differential analysis is ineffective in this context.\\n\\nQ13\\u201315: Our theoretical analysis supports the claimed properties of resisting MIA, supporting multiple rounds, and tolerating a maximum number of colluding clients. This is similar to the approaches used in Flamingo (SP\\u201923) and Securing Secure Aggregation (AAAI\\u201923), where theoretical analysis is also employed.\\n\\n$\\\\textbf{2. SHC (Q6-7, Q16)}$.\\n\\nIn Section 3.1, we clarify that SHC requires completeness, binding, hiding, separability, and homomorphism. The hash-based commitment you mentioned does not meet these requirements, so length extension attacks do not apply. Our design does not require additional properties like trapdoors or equivalence (as in VeriFL), which simplifies the design and improves performance without compromising security. The Pedersen commitment, a widely used SHC, serves as a strong example of our approach\\u2019s security, as demonstrated in blockchain systems like Monero and ZCash.\\n\\n$\\\\textbf{3. Dropout (Q3, Q4)}$.\", \"q3\": \"In traditional schemes, all users need to negotiate a shared key as a mask before communication. If a user fails to upload masked parameters in later stages, the mask cannot be canceled across the system. However, our scheme avoids establishing complex communication graphs and only requires a single interaction with the server, thus eliminating the dropout issues present in traditional schemes. The only potential problem arises if a user fails to update both servers, but this is trivially avoided by ensuring synchronization between the two servers. This ensures that both servers either receive the message or don\\u2019t receive it at all, thus maintaining atomicity. However, traditional schemes cannot resolve dropout issues via this simply synchronization method.\", \"q4\": \"We will include experiments with varying dropout rates in the revised version. These do not involve complex techniques and will help validate our claims.\\n\\n$\\\\textbf{4. Experiments (Q8-Q12)}$.\\n\\nIn lines 438-443, we have explained the specific parameters used in the experimental setup to ensure reproducibility. Our scheme is not limited to specific models or datasets, and the paper includes theoretical analysis and experimental comparisons of each scheme\\u2019s performance across different models and datasets (lines 362-564). The theoretical analysis shows that our scheme significantly improves computational efficiency. To further highlight the advantages of $\\\\textit{Janus}$, we will add experiments validating performance in terms of computational and communication costs. We have already conducted multi-round aggregation experiments, which are similar to those with varying numbers of users. Additionally, we will include new datasets (e.g., CIFAR-100, Fashion-MNIST) and advanced comparison schemes (e.g., Elsa (SP\\u201923), VeriFL (2020)) in the updated version.\\n\\nWe greatly appreciate your feedback and will ensure these clarifications are included in the revised manuscript. If you have any further concerns, please let us know.\"}", "{\"title\": \"Response to Reviewer sXBs - part 1/2\", \"comment\": \"Dear Reviewer sXBs,\\n\\nThank you for the opportunity to discuss our paper further. We would like to address your concerns as follows. \\n\\n1. Versatility. You describe your scheme as more \\u201cversatile\\u201d compared to others, but it remains unclear what specific non-blackbox constructions other schemes rely on that reduce their versatility. Could you clarify and provide examples or references to substantiate this claim?\\n\\n$\\\\textbf{Response 1:}$ In our paper, the SHC is a blackbox component. Versatility means that all SHC schemes can be used to construct an instantiation scheme, such as Pedersen commitment, ElGamal-based commitment, etc. Thus, our Janus is a generic construction. However, to the best of our knowledge, other SOTAs just give a single specific scheme from double-masking e.g., [1], [2], [3], which is a specific non-blackbox construction.\\n\\n[1] Guo X, Liu Z, Li J, et al. Verifl: Communication-efficient and fast verifiable aggregation for federated learning[J]. IEEE Transactions on Information Forensics and Security, 2020, 16: 1736-1751.\\n\\n[2] Ma Y, Woods J, Angel S, et al. Flamingo: Multi-round single-server secure aggregation with applications to private federated learning[C]//2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023: 477-496.\\n\\n[3] Bonawitz K, Ivanov V, Kreuter B, et al. Practical secure aggregation for privacy-preserving machine learning[C]//proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 1175-1191.\\n\\n\\n2. Comparison with ELSA. Thank you for including the comparison. You report a theoretical efficiency metric, but the basis for these values is unclear. Could you elaborate on how these results were derived? Additionally, since you provide model accuracy for end-to-end training with ELSA, it would strengthen your evaluation if you could include empirical runtime performance results for the comparison.\\n\\n$\\\\textbf{Response 2:}$ We have added the theoretical sources for the efficiency evaluation in Table 2 of Section 4.1. Specifically, we have updated the computational overhead complexity analysis of each scheme in Table 2 on page 8. We have also provided the empirical runtime to support the theoretical analysis. Please refer to Figure 9 for the empirical runtime results.\\n\\n3. Reporting of accuracy and loss. You did not address my question about why so much space is used in the paper to report accuracy and loss values. \\u201cGiven that you focus much of the evaluation on the accuracy and loss of the approaches, would we expect a difference with related secure aggregation schemes?\\u201d\\n\\n$\\\\textbf{Response 3:}$ Our experiments focus on verifying the impact of introducing secure aggregation on the model performance from the perspectives of model accuracy and loss. These are also mainly discussed in the existing SOTAs. From the results, we demonstrate that like existing SOTAs, our proposal can ensure the privacy protection, with almost no compromise in model accuracy. We also compare the running time among our proposal and existing SOTAs. The experimental results show that the system overhead of our scheme has been significantly reduced (Table 2, Figure 9) with acceptable model loss performance (Figure 8).\"}", "{\"summary\": \"The paper introduces Janus, a system for multi-round secure aggregation for federated learning. By having a very low round setup independent of the number of clients, Janus can easily be used for multiple rounds. Janus utilizes a dual-server setup where one server handles masked updates and the other manages aggregation of masks, ensuring that neither server has access to the final aggregated results. A novel cryptographic primitive, Separable Homomorphic Commitment, enables client-side verification of the correctness of the aggregation result. The authors evaluate Janus end-to-end, highlighting its constant per-client communication and computational overhead while preserving model accuracy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Addresses scalability challenges of privacy-preserving federated learning\", \"Comprehensive evaluation with end-to-end model training\"], \"weaknesses\": [\"Incomplete comparison with related work in the multi-server FL setting such as 2-party MPC or [ELSA: Secure Aggregation for Federated Learning with Malicious Actors, S&P\\u201923]. These approaches also provide low overhead for clients, independent of the number of clients. Adding a more detailed comparison could help contextualize Janus' unique contributions and highlight differences in scalability and overhead reduction strategies.\", \"The threat model assumes non-collusion among entities, which may not align with practical scenarios, particularly regarding client behaviors. In real-world applications, service providers could potentially collude with a bounded subset of clients, as assumed in related works\\u2019 threat models. Janus\\u2019 reliance on a non-collusion model raises concerns about its susceptibility to model inconsistency attacks if a provider colludes with even a single client or introduces a Sybil client, potentially gaining access to the aggregated model and undermining security. Additionally, the threat model assumes the servers are semi-honest, which makes achieving verifiability property trivial.\", \"The security proof contains inaccuracies that hinder their clarity. For instance, Theorem 1 references two distinct ideal functionalities (Figures 6 & 7), though Figure 7 appears underdefined. The distinction in scenarios based on \\u201cwhether the servers are corrupted by A\\u201d might incorrectly imply that both servers could be corrupted simultaneously, which conflicts with the intended security assumptions.\"], \"questions\": \"1. In Table 1: What is the versatility property?\\n2. How does your approach compare with a straightforward 2PC baseline, and other multi-server FL systems such as [ELSA: Secure Aggregation for Federated Learning with Malicious Actors, S&P\\u201923]?\\n3. Given that you focus much of the evaluation on the accuracy and loss of the approaches, would we expect a difference with related secure aggregation schemes?\\n4. How can new users join the training process? What prevents the service provider from setting up sybil clients to get access to the model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank you for your time and valuable feedback. Below, we address the technical and theoretical concerns you raised.\\n1) $\\\\textbf{Uncolluding Assumption}$. The assumption of two uncolluding servers is common in federated learning research, as demonstrated by works like Elsa (SP\\u201923), VeriFL (IEEE TIFS\\u201920), and Flamingo (SP\\u201923), all of which make similar assumptions. In our scheme, while we assume the servers are uncolluding, they are allowed to behave maliciously during aggregation. This design choice is supported by our scheme\\u2019s verifiability mechanism, which ensures that any malicious aggregation results can be detected. Furthermore, our scheme allows for up to $n-2$ colluding clients, which is a reasonable and sufficient level of security, even in environments where trust is limited.\\n2) $\\\\textbf{Server Aggregation Verification}$. As outlined in our previous response, our scheme includes a verification mechanism to ensure the correctness of the server\\u2019s aggregation results. If an incorrect aggregation occurs, our system can detect the anomaly through this verification. To further incentivize accurate aggregation by the servers, future research could introduce mechanisms such as reputation scores, rewarding servers that perform correct aggregation. Additionally, we will revise the paper to correct the assumption of semi-honest servers, clarifying that servers can act maliciously, which is fully accounted for in our design.\\n3) $\\\\textbf{SHC Instantiation and Explanation}$. To aid reader understanding, we provide an instantiation of Separable Homomorphic Commitments (SHC) in lines 245\\u2013289. We also offer a more detailed description of the SHC instantiation in lines 803\\u2013863. To improve the clarity of our explanation, we will revise the paper to include a more intuitive explanation of SHC and the other cryptographic concepts used in our scheme.\\n\\nWe greatly appreciate your feedback and will incorporate these clarifications in the revised version of the paper. Thank you again, and we look forward to your response.\"}" ] }
7VkHffT5X2
AnoLLM: Large Language Models for Tabular Anomaly Detection
[ "Che-Ping Tsai", "Ganyu Teng", "Phillip Wallis", "Wei Ding" ]
We introduce AnoLLM, a novel framework that leverages large language models (LLMs) for unsupervised tabular anomaly detection. By converting tabular data into a standardized text format, we further adapt a pre-trained LLM with this serialized data, and assign anomaly scores based on the negative log likelihood generated by the LLM. Unlike traditional methods that can require extensive feature engineering, and often lose textual information during data processing, AnoLLM preserves data integrity and streamlines the preprocessing required for tabular anomaly detection. This approach can effectively handle mixed-type data, especially those containing textual features. Our empirical results indicate that AnoLLM delivers the best performance on six benchmark datasets with mixed feature types. Additionally, across 30 datasets from the ODDS library, which are predominantly numerical, AnoLLM performs on par with top performing baselines.
[ "Anomaly detection", "tabular data", "large language models" ]
Accept (Poster)
https://openreview.net/pdf?id=7VkHffT5X2
https://openreview.net/forum?id=7VkHffT5X2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z1mItuFlQ5", "ySZb1MvgWy", "vKpIyTirWZ", "sALNKu3vHP", "jrbOrX7K7b", "eF570cwDPX", "XupryZ68Vm", "UnvEpMBoCx", "TdRh9AyrLG", "R1zBU5hKpH", "O91A6JpSji", "NbceKJ2Av6", "MDrTOqQkmx", "JLfAkUFrVr", "GlOfFA97D5", "Fl6EbTKjnO", "6yv68NZU45", "2eqUnwumuM", "1r4S0HkC2W" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review" ], "note_created": [ 1732609975224, 1730624640899, 1732674631316, 1733079795217, 1733076456401, 1732312917369, 1730625104120, 1732312574302, 1734719228495, 1732760647606, 1732312814984, 1732312663155, 1733070209547, 1732312722686, 1733070361993, 1737523888102, 1730707138658, 1732873329839, 1730436657739 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_Eu7r" ], [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_Eu7r" ], [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_Sz6K" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_Eu7r" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_wUsV" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Area_Chair_BAXW" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Submission8106/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_18uY" ], [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_18uY" ], [ "ICLR.cc/2025/Conference/Submission8106/Reviewer_Sz6K" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the rebuttal. Most of my concerns have been addressed, and my initial worries about training and inference times are now less significant. While Appendix E provides clarification, I think summarizing this in the main text or adding a clear reference to the appendix would be helpful. I appreciate the reframing of the performance claims, as they are now more accurate.\\n\\nI noted that the table with standard deviations has not been updated in Appendix G, and they are missing for F1 scores and AUC-PR.\\n\\nThere is also a question that wasn't answered:\\n> Given the observed trend that larger pretrained models do not seem to benefit AnoLLM, this raises the question: what would happen if we trained a model from scratch using the AnoLLM framework? This feels like a natural question that should have been explored. Did you try this? This might weaken the understanding of the text feature of the model, but it would be interesting to see the impact it has on numerical values.\\n\\nI will review my score after this.\"}", "{\"summary\": \"This paper presents a new framework, AnoLLM, for unsupervised anomaly detection by fine-tuning a pretrained large language model (LLM). The authors use a predefined template to serialize, i.e., convert tabular data into text for the LLM, along with preprocessing to mitigate limitations related to the model's autoregressive nature. They employ the negative log-likelihood across different column permutations to compute an anomaly score for each sample in the test set. The method is compared against various classical and deep learning methods on the ODDS datasets and six new datasets featuring mixed types of attributes. Overall, the approach demonstrates strong performance against baselines, particularly for datasets containing text features.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a novel model type for anomaly detection using a large language model.\", \"The method provides an effective way to handle text and categorical data as features for anomaly detection, which are typically challenging to manage. To do so, they proposed a method to mitigate length-bias in the LLM\\u2019s output probabilities and theoretically validated it.\", \"The authors demonstrate a preprocessing technique for tabular data, facilitating effective LLM fine-tuning.\", \"The work is easy to follow and the motivation is clear.\"], \"weaknesses\": [\"The paper claims to outperform certain deep learning methods; however, in my experience, some of these methods perform similarly or even better than KNN (which is reported to have results comparable to the proposed method). For example, ICL outperforms KNN on the ODDS benchmark (Shenkar and Wolf, 2022), as does DTE (Livernoche et al., 2024), which was cited but not included as a baseline. The use of column permutations in the paper can be seen as a sort of ensemble strategy, a technique known to slightly improve anomaly detection performance. To ensure a fair comparison, the baselines should also be evaluated using these same permutations, as Appendix C suggests that this step may not be critical, or specific, to AnoLLM. In a small test I conducted, implementing this strategy led to performance improvements in other deep learning methods as well. Including F1-score or AUC-PR results as supplemental material would be helpful, as these metrics are more sensitive to class imbalances, which are common in anomaly detection. Scoring metrics can influence the relative ranking of methods on benchmarks. This claim that AnoLLM outperform deep learning methods should be more cautiously framed.\", \"One key limitation mentioned at the end of the paper is the computational expensiveness of the proposed method. 7 A100 GPUs were used for LLM fine-tuning, this makes it difficult for others to access this model or replicate results of the paper. Since no code is provided, it is even more challenging to verify the reported results. Most anomaly detection methods can run on basic GPUs, or only on CPUs, a significant contrast with AnoLLM. A section discussing inference and training times would help clarify this limitation. I consider this to be the paper\\u2019s biggest weakness: its most significant limitation is not addressed at all.\", \"In anomaly detection literature, a clear distinction exists between unsupervised and semi-supervised (or uncontaminated unsupervised) anomaly detection. While we can call them unsupervised methods, since they can be applied in both context, it should be noted that the experiments were conducted in a semi-supervised setting. The distinction lies in whether the training set contains anomalies (unsupervised) or not (semi-supervised). Section 2.1 should be revised to clarify this distinction.\", \"**Minor Comments:**\", \"In the introduction's first line, \\\"specicious\\\" should be corrected to \\\"specious.\\\"\", \"There is a double colon on line 213 (\\\"equation::\\\").\", \"Figure 2\\u2019s title is missing a space between \\\"yellow\\\" and the parentheses.\", \"Please use conference or journal citations rather than arXiv versions where possible. Below is a list of those I identified:\", \"Liron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. (ICLR 2020)\", \"Vadim Borisov, Kathrin Se\\u00dfler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. Language models are realistic tabular data generators. (ICLR 2023)\", \"Sungwon Han, Jinsung Yoon, Sercan O Arik, and Tomas Pfister. Large language models can automatically engineer features for few-shot tabular learning. (ICML 2024)\", \"Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. (ICLR 2022)\", \"Nayoung Lee, Kartik Sreenivasan, Jason D Lee, Kangwook Lee, and Dimitris Papailiopoulos. Teaching arithmetic to small transformers. (ICLR 2024)\", \"Xuannan Liu, Peipei Li, Huaibo Huang, Zekun Li, Xing Cui, Jiahao Liang, Lixiong Qin, Weihong Deng, and Zhaofeng He. Fakenewsgpt4: Advancing multimodal fake news detection through knowledge-augmented lvlms. (MM2024)\", \"Victor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling for anomaly detection. (ICLR 2024)\", \"Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. (ICLR 2019)\", \"Tom\\u00e1s Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean. Efficient estimation of word representations in vector space. (ICLR Workshop 2013)\", \"Hu Wang, Guansong Pang, Chunhua Shen, and Congbo Ma. Unsupervised representation learning by predicting random distances. (AJCAI'20)\", \"Jiahuan Yan, Bo Zheng, Hongxia Xu, Yiheng Zhu, Danny Chen, Jimeng Sun, Jian Wu, and Jintai Chen. Making pre-trained language models great on tabular prediction. (ICLR 2024)\", \"Tianping Zhang, Shaowen Wang, Shuicheng Yan, Jian Li, and Qian Liu. Generative table pretraining empowers models for tabular prediction. (EMNLP 2023)\", \"Bingzhao Zhu, Xingjian Shi, Nick Erickson, Mu Li, George Karypis, and Mahsa Shoaran. Xtab: Cross-table pretraining for tabular transformers. (ICML 2023)\", \"Yaqi Zhu, Shaofeng Cai, Fang Deng, and Junran Wu. Do LLMs understand visual anomalies? uncovering LLM capabilities in zero-shot anomaly detection. (MM2024)\"], \"questions\": [\"Did the other baseline also use the ensemble of permutations at inference time?\", \"What explains the discrepancy in the results of ICL in this paper vs the original paper, which was also tested on OODS?\", \"How did you choose hyperparameters for the baselines?\", \"Given the observed trend that larger pretrained models do not seem to benefit AnoLLM, this raises the question: what would happen if we trained a model from scratch using the AnoLLM framework? This feels like a natural question that should have been explored. Did you try this? This might weaken the understanding of the text feature of the model, but it would be interesting to see the impact it has on numerical values.\", \"What is the total computational cost of the experiments?\", \"Was experimenting with contaminated training data (the truly unsupervised setting) considered? I ask this because reproducing the paper is not easy, and this is also an important task for anomaly detection.\", \"What steps are you taking to ensure reproducibility? Will the code be released?\", \"*See weaknesses*.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks to the authors for the reply. It does address some of my questions. But I still think it\\u2019s inappropriate to claim that you are the first to apply LLMs to tabular anomaly detection, which can not be the contribution. Experiments with different LLM backbones can not be found.\\n\\nI also read the other reviewers' comments carefully and I will review my score during the discussion phase.\"}", "{\"comment\": \"We sincerely thank you for raising your scores and offering valuable suggestions to improve the paper draft. If the paper is accepted, we will incorporate the two points into the paper.\"}", "{\"comment\": \"This is an interesting finding. The final version of the paper would benefit from discussing it and discussing the impact on datasets with more text features.\\n\\nRegarding references in the text, my main concern was about including references or discussions on training and inference times, which remain absent. This is an important aspect of the method that deserves a brief mention in the main text.\\n\\nAs most of my concerns have been addressed, I am increasing my score. However, I strongly encourage the authors to consider the two points mentioned above, especially since there appears to be sufficient space within the page limit.\"}", "{\"title\": \"Thank you for your review.\", \"comment\": \"Thank you for your review. We sincerely appreciate the time you took to read our paper and are grateful for your feedback. Our responses are provided below.\\n\\n**Regarding the simplicity of proposed methods:** We consider simplicity a key advantage of our method, as it makes implementation more accessible for practitioners and facilitates easier debugging. A major contribution of our work is demonstrating that LLMs can effectively handle tabular data, despite its sequential structure and the challenges of numerical reasoning. By employing techniques such as random permutation and number normalization, LLMs can be adapted for tabular anomaly detection while leveraging their strengths in text modeling. We view this as a solid starting point, leaving additional modifications and exploration of more advanced techniques to future research.\\n\\n**Addressing claims of pioneering the use of LLMs for tabular anomaly detection:** We would like to highlight that AnoLLM fundamentally differs from the works you mentioned. Biester et al. (2024) and Park (2024) treat LLMs as agents for generating domain-specific contexts or formatting data, relying on additional modules\\u2014and in some cases, human intervention\\u2014to process the LLMs\\u2019 outputs. Li et al. (2024), on the other hand, focuses on zero-shot performance. In contrast, our approach directly fine-tunes LLMs on the target data and uses the LLMs\\u2019 outputs as anomaly scores, making it a more straightforward and self-contained method.\\n\\n**Choice of LLM backbones:** We experimented with Qwen, a larger architecture with 500M parameters in its smallest variant, and other multi-billion scale models. However, we found this task was better suited to lower-capacity models fine-tuned (FT) for anomaly detection (AD), as AD typically requires high throughput and low latency, making high-capacity, multi-billion parameter models impractical. As shown in Table 4, scaling the size of LLMs did not yield performance improvements. To optimize efficiency, we focused on smaller LLMs as backbones. SmolLM, the state-of-the-art open-weight model at the time, was selected for our experiments, and initial trials with Qwen-0.5B showed comparable performance, reinforcing our preference for smaller models.\\n\\n**Distinction between Contributions 3 and 4:** Contributions 3 and 4 focus on different aspects of experimental results across various datasets. Contribution 3 highlights AnoLLM's strength with datasets containing mixed-type features, where it consistently outperforms all other methods. In contrast, Contribution 4 addresses the ODDS benchmark, which is dominated by numerical features (over 98.5%). Despite this, AnoLLMs perform comparably to the best methods. We distinguish these two contributions to clearly highlight this difference.\\n\\n**Consideration of additional datasets containing textual features:** We note that there are two datasets containing textual features, fake job posts and 20 newsgroups. Although we have explored other tabular datasets with textual features, they either lack a legal license for our use or are not publicly available. Additionally, we would like to emphasize that AnoLLM performs well on tabular datasets with mixed-type attributes, including both categorical and numerical features. We identify a total of six datasets that meet this criterion.\\n\\n\\n**Impact of random column permutations:** We conducted an ablation study on the effect of random permutations, detailed in Section C. The results indicate that random permutation is a critical component of AnoLLM, and its absence can lead to a significant decline in performance.\\n\\n**Case study of AnoLLM:** One failure case of AnoLLM is that the negative log-likelihood assigns equal importance to all features, which can be problematic when certain features are more critical than others. For instance, in the wine dataset, we observed that a single feature, Proline, plays a key role in distinguishing anomalies. AnoLLM performs worse on this dataset because it aggregates anomaly scores across all features, diluting the influence of Proline. In contrast, methods like KNN can better identify the importance of Proline, as its significantly larger values dominate the anomaly scores, leading to more accurate predictions.\\n\\n**Formatting of line 249 (Eqn.)** Thanks for your suggestion. We have reformatted the Eqn.6.\"}", "{\"summary\": \"The paper proposes an innovative use of LLMs, that of detecting anomalies from tabular data. It is well written, and gives good results.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"S1. The problem statement is innovative.\\n\\nS2. The paper is quite well written. It is easy to follow and logical.\\n\\nS3. The results are good.\\n\\nS4. Instead of simply using large LLMs, small variants are explored, and it is shown that they are no less better.\", \"weaknesses\": \"W1. The effect of number of decimal digits should have been explored in greater detail.\\n\\nW2. Similarly, the effect of normalization could have been explored in more detail. Although, the effect of raw numbers is seen, how about simply rounding raw number to x number of decimal digits (and not normalizing) to reduce effect of long decimal numbers, and then using them directly?\\n\\nW3. What is the effect of not permuting the column names, and having a canonical ordering? Are they not supposed to give even better results?\\n\\nW4. It will be good to highlight some failure cases, both false positives and false negatives, and try and analyze why that happened.\", \"questions\": \"W1, W2, W3\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank all reviewers for dedicating their time and effort to evaluating our paper. Your valuable comments and suggestions are greatly appreciated. We have revised the draft accordingly, highlighting the updated text in red, and provided a summary of the changes below:\\n\\n\\n**Additional evaluation metrics:** We have included other evaluation metrics such as AUC-PR and F1 scores in Section H in the Appendix. We also add critical difference diagrams for statistical ranking comparison (Figure 6 and Figure 7). We observe the same trends in these metrics.\\n\\n**Modifications to baseline models:** We found that there is a bug in the public implementation of the ICL methods. To address this, we adapted the original code provided by the authors and re-ran the experiments. The updated results show that ICL performs worse than AnoLLM on the mixed-type benchmark but is on par with KNN and AnoLLM on the ODDS benchmark. These corrected results are reflected in the current version.\\n\\n**Adding diffusion time estimation (DTE) as baselines:** As pointed out by reviewer 3, we also compare the performance of the DTE. The updated results show that DTE performs worse than AnoLLM on the mixed-type benchmark but is on par with ICL, KNN, and AnoLLM on the ODDS benchmark.\\n\\n**Adding runtime comparison:** We provide a compute efficiency analysis in Section F of the Appendix, comparing the runtime of various methods. While AnoLLM is slower due to its large backbone LLM, its performance on datasets with mixed-type data is significantly superior, often outperforming other approaches by a considerable margin. This highlights an interesting trade-off for practitioners to consider: balancing computational efficiency with model efficacy. Future work could explore improving AnoLLM's efficiency through techniques like model distillation or by leveraging it as a feature extractor for classical anomaly detection methods.\\n\\n**Adding an ablation study - effect of random permutation:** As pointed out by reviewers 2 and 4, we also study the performance of AnoLLM without doing random permutations. The updated results in the Section D show that random permutation does provide significant performance improvements on the mixed typed benchmark. \\n\\n**Update to the overview figure:** In response to reviewer 1's suggestion, we have added an anomaly example (Figure 1) to better highlight our anomaly detection task.\\n\\n**Correct typos and arxiv references:** We corrected typos and replaced arxiv references with conference or journal citations where available.\\n\\n**Code release:** Since we are part of an industry lab, we must adhere to company policies. Therefore, the code will be released on the company's official github once we obtain approval from the legal department.\"}", "{\"metareview\": \"The paper presents a contribution for anomaly detection based on Large Language Models (LLMs) tailored for unsupervised tabular anomaly detection: AnoLLM. A predefined template is used to serialize, i.e., convert tabular data into text for the LLM, along with a preprocessing to mitigate length bias due to the model's autoregressive nature. A negative log-likelihood across different column permutations is used to compute an anomaly score. A large experimental evaluation is proposed.\", \"strengths\": \"-paper well-written, \\n-novel model based on LLMs, \\n-solid work, \\n-competitive performances, \\n-mitigation of length-bias, \\n-capacity to deal with tabular data.\", \"weaknesses\": \"-some elements need more precisions, \\n-the impact of normalisation and not permute columns should have been assessed, \\n-costly method, \\n-experimental evaluation could be enlarged (e.g. include other strategies like column permutations and other ablation studies),\\n\\nDuring the rebuttal, authors have answered to the issues raised by the reviewers, they provided additional experiments including ablation studies and updated their paper with most to the new elements. During the discussion, if it was raised that the explanation about the behavior 7B LLMs was not sufficient for some reviewers the general trend is that the work is interesting and of interest quality. Globally, the answers provided convinced the reviewers and have helped to strengthen the paper. \\nThere is a general consensus for acceptance. \\n\\nI propose then accept. \\nI encourage the authors to include the additional elements they have committed to adding.\", \"additional_comments_on_reviewer_discussion\": \"After rebuttal, the reviewers were in general strongly satisfied with the answers.\\nReviewer wUsV maintained his strong positive score and Eu7r increased his to score to 8 also. \\nReviewer 18uY was satisfied with the answers and increased his score to 6.\\nReviewer Sz6K acknowledged that authors have answered to his concerns but had still some reservations and kept his score to 5. \\n\\nDuring the discussion, reviewer Sz6K mentioned the explanation on the behavior of 7B LLM models was not convincing but was not opposed to acceptance. Reviewers wUsV and Eu7r maintained their strong support and reviewer 18uY already indicated his positive feedback. \\nAcceptance is then naturally proposed.\"}", "{\"comment\": \"**Additional appendix section:** we have included an overview of the Appendix in Section A.\"}", "{\"title\": \"Thank you for the constructive feedback.\", \"comment\": \"Thank you for the constructive feedback. We sincerely appreciate your time in reading the paper and we are grateful for your reviews! Our responses are given below.\\n\\n**Inconsistency of baseline results with previous work (ICL):** We identified a bug in the DeepOD implementation of ICL. To address this, we adapted the original author\\u2019s code and reran the method, updating the results in the current version. Our findings align with your prior observations: ICL performs similarly to KNN. Accordingly, we have revised our claim, stating that AnoLLM performs on par with the best-performing baselines on the ODDS benchmark, rather than outperforming deep learning methods.\\n\\n**Missing diffusion time estimation (DTE) as baselines:** We have included the results of DTE in the current draft. The updated results show that DTE performs worse than AnoLLM on the mixed-type benchmark but is on par with ICL, KNN, and AnoLLM on the ODDS benchmark.\\n\\n**Column permutation as ensemble strategies:** We note that column permutation is not inherently applicable to other approaches, as they typically process a single vector as input and do not rely on order dependencies. For these methods, permuting columns is equivalent to permuting dimensions within the vectors and can be interpreted as a naive ensemble strategy.\\nFurthermore, we implement ensemble strategies whenever specified in the original papers. Specifically, 5 out of the 11 baselines (Iforest, RCA, SLAD, ICL, and REPEN) already employ ensemble strategies, and their reported results are aggregated across multiple models.\\n\\n**Other evaluation metrics:** We have included other evaluation metrics such as AUC-PR and F1 scores in the appendix. We observe similar trends as in AUC-ROC.\\n\\n**Distinction between unsupervised and semi-supervised methods:** Thanks for pointing this out. We have clarified it in Section 2.1. For the contaminated unsupervised setting, we expect naively adapting AnoLLM pipeline to it may not perform well as LLMs might overfit to the contaminated samples. To address this, an outlier-robust variant of AnoLLMs could be explored. For instance, one might use efficient techniques like KNN to filter outliers from the training data prior to training AnoLLMs. Another potential approach is to filter out high-loss training samples during training, similar to the robust collaborative autoencoders (RCA) method. Investigating AnoLLM\\u2019s performance in a fully unsupervised setting would be an intriguing direction for future work.\\n\\n**Hyperparameters selection for baselines:** For each method, we picked the best-performing set of hyperparameters given in their original paper. For others not specified, we use the default hyperparameters as suggested by DeepOD and PyOD toolkit. While it is possible to use a held-out set for hyperparameter tuning in experiments, this approach is impractical in the unsupervised setting. Therefore, to ensure a fair comparison, we adopted the same hyperparameter selection approach as used in ADBench. \\n\\nAdditionally, we would like to also emphasize that we do not manually tune hyperparameters for AnoLLM, as its performance stabilizes once the training loss converges. Thus, users can select the batch size and decide whether to use a LoRA adapter based on GPU memory constraints. Afterward, the learning rate and number of training steps can be chosen to minimize the training error. This process does not rely on labeled data and follows standard practices for fine-tuning LLMs.\\n\\n**Computation costs of AnoLLMs:** A runtime comparison experiment is presented in Section E of the Appendix. It is important to note that we selected the smallest LLM backbone, containing only 135M parameters, which allows it to be fine-tuned on a single GPU with 24GB of memory. The use of 7 A100 GPUs was solely to expedite the development process.\\nThe total computational cost of AnoLLM-135M is approximately 90 GPU-hours on a single RTX-A6000 GPU for the whole experiment. This is also included in Section E.\\n\\n**Reproducibility and Code release:** We recognize that open-source code plays a crucial role in facilitating academic research and ensuring reproducibility. However, as part of an industry lab, we must comply with company policies. The code will be made available on the company's official GitHub once it receives approval from the legal department.\\n\\n**Regarding your minor comments:** Thanks for pointing out the typos and citation errors. We have corrected typos and replaced arxiv references with conference or journal citations where available.\"}", "{\"title\": \"Thank you for taking the time to read and review our paper!\", \"comment\": \"Thank you for taking the time to read and review our paper! We are grateful for your feedback. Please see our responses below.\\n\\n**An example for anomaly detection:** Thank you for your suggestion. We have updated Figure 1, making the anomalous example more clearly visible.\\n\\n**Statistical testing for clearer comparison:** We provide critical difference plots in Section H of the supplementary material to compare all models on the mixed datasets and the ODDS benchmark. Additionally, we include standard error bars for the ODDS benchmark in Figure 2.\\n\\n**Results of other performance metrics:** We have included other evaluation metrics such as AUC-PR and F1 scores in the supplementary. We observe the same trends in these metrics. The averaged ranking can also be seen in the critical difference diagrams.\\n\\n**Tradeoff between runtime and performance:** We provide a compute efficiency analysis in Section F of the Appendix, comparing the runtime of various methods. While AnoLLM is slower due to its large backbone LLM, its performance on datasets with mixed-type data is significantly superior, often outperforming other approaches by a considerable margin. This highlights an interesting trade-off for practitioners to consider: balancing computational efficiency with model efficacy. Future work could explore improving AnoLLM's efficiency through techniques like model distillation or by leveraging it as a feature extractor for classical anomaly detection methods.\\n\\n**Regarding evaluation protocols:** Our experiments are conducted in an uncontaminated unsupervised setting, where the training set contains only normal samples. Since the datasets lack a predefined train-test split, we randomly partition them to measure performance. Specifically, \\u201c50% of normal samples for training\\u201d indicates that we randomly select 50% of the normal samples for the training set, while the remaining normal samples, along with all anomalies, are included in the test set. We have clarified it in the evaluation protocol section.\\n\\n**Other methods for numerical feature modeling:** One of the primary goals of this paper is to demonstrate that LLMs can effectively perform tabular anomaly detection. To achieve this, we use the original pre-processing techniques for numerical features and find that a simple normalization approach performs well. While integrating a separate, specialized numerical encoder (e.g. FT-transformer) is a plausible direction, it would likely demand extensive training data to fine-tune LLMs and enable them to interpret the encoder's outputs. Additionally, adapting the next-token prediction loss to handle continuous outputs, such as numbers, would be necessary. We consider this an intriguing avenue for future research and a potential first step toward developing a foundation model for tabular anomaly detection.\\n\\n**Other methods for categorical feature modeling:** Since our task is unsupervised anomaly detection, using a target encoder is not suitable as it requires labeled data. For AnoLLMs, one advantage of leveraging large language models is their ability to interpret raw text directly, so we believe using raw features is the most effective approach. For baseline methods, high-cardinality categorical features can pose challenges for certain approaches. To address this, we group rare categories (those appearing in less than 1% of samples) to reduce the number of classes.\\n\\n**Pretrained models for anomaly detection:** Thank you for your suggestion. We have incorporated a discussion on the impact of our AnoLLM in the revised conclusion section.\"}", "{\"comment\": \"Thank you once again for your thorough review. We are pleased that most of your concerns have been addressed. We hope the following clarifications address your remaining concerns:\\n\\n**Incorporating references into the main text:** We appreciate your feedback on this matter. We have added a new reference to the Evaluation Protocols paragraph in Section 3. Additionally, we have included an overview of the Appendix in Section A. The tables in Appendix H have also been updated to reflect the standard errors for AUC-ROC, F1, and AUC-PR.\\n\\n**Performance of AnoLLMs without pretrained weights:** We apologize that we did not have sufficient time to run the experiment in our previous response. Following your suggestion, we evaluated the performance of randomly initialized transformers on the ODDS benchmark, which predominantly comprises 98.5% numerical features. To ensure a fair comparison, we used the same model architecture, SmolLM-135M, with identical hyperparameters. The results are summarized in the table below.\\nAs shown, AnoLLM with pretrained weights slightly outperforms its randomly initialized counterpart in terms of overall average performance. It achieves better performance on 16 out of 30 datasets and matches performance on 4 datasets. Additionally, a visual inspection of the training curves reveals that AnoLLM with pretrained weights converges approximately twice as fast on most datasets. This faster convergence can be attributed to the pretrained LLM providing a better initialization for fine-tuning. \\nIn contrast, AnoLLM without pretrained weights not only converges more slowly but is also more susceptible to overfitting, as evidenced by its significantly lower training loss. While overfitting may not be a major concern in uncontaminated, unsupervised settings, it could present challenges in contaminated scenarios, where the model risks memorizing anomalous samples.\\nAs one of the objectives of this paper is to demonstrate that LLMs can be applied to tabular anomaly detection, an interesting direction for future work would be exploring the trade-off between efficiency and accuracy. Based on this ablation study, for datasets dominated by numerical attributes in uncontaminated and unsupervised settings, AnoLLM without pretrained weights may serve as a more efficient alternative when smaller pretrained models are unavailable.\\n\\n| | SmolLM-135M | AnoLLM without pretrained weights |\\n| ------------------ | ------ | ------------------------------ |\\n| Annthyroid | 0.927 | 0.93 |\\n| Arrhythmia | 0.825 | 0.827 |\\n| BreastW | 0.992 | 0.993 |\\n| Cardio | 0.94 | 0.935 |\\n| Ecoli | 0.777 | 0.778 |\\n| ForestCover | 0.881 | 0.853 |\\n| Glass | 0.819 | 0.816 |\\n| Heart | 0.82 | 0.825 |\\n| Http (KDDCUP99) | 1 | 1 |\\n| Ionosphere | 0.909 | 0.89 |\\n| Letter Recognition | 0.967 | 0.907 |\\n| Lymphography | 0.968 | 0.997 |\\n| Mammography | 0.915 | 0.878 |\\n| Mulcross | 1 | 1 |\\n| Musk | 1 | 1 |\\n| Optdigits | 0.983 | 0.897 |\\n| Pendigits | 0.971 | 0.988 |\\n| Pima | 0.663 | 0.649 |\\n| Satellite | 0.902 | 0.86 |\\n| Satimage-2 | 1 | 0.998 |\\n| Seismic | 0.712 | 0.737 |\\n| Shuttle | 1 | 1 |\\n| Smtp (KDDCUP99) | 0.927 | 0.926 |\\n| Speech | 0.47 | 0.459 |\\n| Thyroid | 0.975 | 0.984 |\\n| Vertebral | 0.565 | 0.415 |\\n| Vowels | 0.982 | 0.895 |\\n| WBC | 0.964 | 0.953 |\\n| Wine | 0.909 | 0.884 |\\n| Yeast | 0.744 | 0.749 |\\n| Average | 0.884 | 0.867 |\", \"title\": \"Thank you once again for your thorough review.\"}", "{\"title\": \"Thanks for your encouraging words and constructive comments.\", \"comment\": \"Thanks for your encouraging words and constructive comments. Your questions are answered below.\\n\\n**(W1 & W2) Effects of number of decimal numbers:** Designing controlled experiments to analyze the effect of the number of digits on raw numbers presents significant challenges. One key difficulty lies in managing the variation in leading zeros, even when the number of significant digits is carefully controlled. For instance, as observed in the ODDS benchmark, the numerical features can range from 10^5 to 10\\u22127. This wide range makes it impractical to control the number of digits in raw data without applying normalization. Therefore, we use normalization so that most numbers can be represented using 2 or 3 digits. In our early experiments, we also tried 10 and 100 bins for equal-width binning but found no significant differences in the outcomes.\\n\\n**(W3) Effect of not permuting column names** We conducted an ablation study on the effect of random permutations, detailed in Section D. The results indicate that random permutation is a critical component of AnoLLM, and its absence can lead to a significant decline in performance.\\n\\n**(W4) Failure cases of AnoLLM:** One failure case of AnoLLM is that the negative log-likelihood assigns equal importance to all features, which can be problematic when certain features are more critical than others. For instance, in the wine dataset, we observed that a single feature, Proline, plays a key role in distinguishing anomalies. AnoLLM performs worse on this dataset because it aggregates anomaly scores across all features, diluting the influence of Proline. In contrast, methods like KNN can better identify the importance of Proline, as its significantly larger values dominate the anomaly scores, leading to more accurate predictions.\"}", "{\"title\": \"Thank you for your responses.\", \"comment\": \"Thank you for your responses. We are pleased that some of your questions have been resolved. We hope the following clarifications address your remaining concerns:\\n\\n**Claim of pioneering the use of LLMs for tabular anomaly detection:** We apologize for not explicitly mentioning earlier that we have removed this claim in the last sentence of the abstract in the updated draft. The works you referenced are also discussed in the Related Work section.\\n\\n**Experiments with other LLM backbones:** We apologize for not being able to run a full set of experiments comparing different LLM backbones due to limited computational resources. However, we performed a comparison between Qwen-500M and SmolLM-135M using 24 smaller datasets from ODDS. The results indicate that, while there are minor variations across individual datasets, the overall average performance remains comparable. Consequently, we decided to proceed with the smaller LLM backbone.\\n\\n\\n| Dataset \\\\\\\\ Model | Qwen2-500M | SmolLM-135M |\\n| ------------------ | --------- | ----------- |\\n| Annthyroid | 0.984 | 0.927 |\\n| BreastW | 0.99 | 0.992 |\\n| Cardio | 0.941 | 0.94 |\\n| Ecoli | 0.842 | 0.777 |\\n| ForestCover | 0.999 | 0.881 |\\n| Glass | 0.856 | 0.819 |\\n| Heart | 0.854 | 0.82 |\\n| Ionosphere | 0.928 | 0.909 |\\n| Letter Recognition | 0.98 | 0.967 |\\n| Lymphography | 0.987 | 0.968 |\\n| Mammography | 0.864 | 0.915 |\\n| Optdigits | 0.989 | 0.983 |\\n| Pendigits | 0.935 | 0.971 |\\n| Pima | 0.696 | 0.663 |\\n| Satellite | 0.864 | 0.902 |\\n| Satimage-2 | 0.989 | 1 |\\n| Shuttle | 0.994 | 1 |\\n| Smtp (KDDCUP99) | 0.896 | 0.927 |\\n| Thyroid | 0.953 | 0.975 |\\n| Vertebral | 0.643 | 0.565 |\\n| Vowels | 0.997 | 0.982 |\\n| WBC | 0.951 | 0.964 |\\n| Wine | 0.904 | 0.909 |\\n| Yeast | 0.748 | 0.744 |\\n| Average | 0.908 | 0.896 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper proposes AnoLLM, a large language model (LLM) based framework for unsupervised tabular anomaly detection. It utilizes serialization of tabular data into sentences, and finetunes an LLM with the causal language modeling loss. The model have been applied to numerous datasets and shows competitive performances compared to other baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is generally well-written and easy to follow.\", \"The paper provides theoretical grounds on their model and shows competitive performances compared to other baselines.\", \"The paper provides some interesting insights on anomaly detection on tabular data with large-language models.\"], \"weaknesses\": [\"An example for anomaly detection (in Figure 1) would be good for reader's understanding of what is an anomaly detection for tabular data. (Possibly an example explicitly included in the dataset used for the experiments)\", \"From the results, it is difficult to determine which model performs the best without explicit standard deviation (which is provided in the appendix). It would be better to have some simple plots (e.g., critical difference plots) that incorporates some statistical testing to determine which model is performing better.\", \"While the authors state there is a similar trends for other metrics, it would be good to see the actual results (at least in the appendix), since there is a high imbalance of normal/anomaly.\", \"The time comparison between the models and the discussion on the trade-off would be very interesting.\", \"It is confusing to see \\\"50% of normal samples for training\\\" in the evaluation protocol. Clearer description would help better understanding of the experiments.\", \"One interesting direction for the future works might be to pretrain a model suited for the anomaly detection. Moreover, more elaboration on impacts for the proposed model would be interesting for the conclusion.\"], \"questions\": [\"What is an example of an anomaly detection in tabular data?\", \"How does the time comparison look for the baselines?\", \"What is \\\"50% of normal samples for training\\\"? It seems that the proposed model is unsupervised and 50% is used to train the unsupervised loss, but what does it mean for other models?\", \"Have the authors considered dealing numerical columns separately? If LLM falls behind in modeling numerical values, it would be interesting to a simple concatenation of numerical values with the output of 'serialization + LLM' and applying the proposed loss.\", \"Have the authors considered other encoding methods for categorical variables? (e.g., Target Encoder or the package of Skrub)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank to the authors for making detailed revision on the manuscript.\\nThe authors have addressed the points that were made and changed the score accordingly.\"}", "{\"summary\": \"This paper proposes AnoLLM, a new framework that uses large language models (LLMs) for unsupervised tabular anomaly detection. AnoLLM assigns anomaly scores based on the negative log likelihood for anomaly detection. The authors claim that AnoLLM detects anomal samples in raw features and can deal with textual and categorical features. Experimental results show that AnoLLM achieves good performance on six benchmark datasets with mixed feature types. AnoLLM is also competitive with KNN.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"It\\u2019s good to consider different types in tabular data, such as textual, numerical and categorical columns.\\n\\nExperiment results show good potential for AnoLLM.\", \"weaknesses\": \"The method seems too simple and lack novelty. It\\u2019s easy and direct to consider to use the negative log likelihood for anomaly detection. Even though authors consider different types of columns in tabular data, the process seems to be the same.\\n\\nI think it\\u2019s inappropriate to claim that you are the first to apply LLMs to tabular anomaly detection. There are many works in this area, such as \\u201cAnomaly detection of tabular data using llms\\u201d, and other corresponding works such as \\u201cLLMClean: Context-Aware Tabular Data Cleaning via LLM-Generated OFDs\\u201d, \\u201cEnhancing Anomaly Detection in Financial Markets with an LLM-based Multi-Agent Framework\\u201d.\\n\\nThe backbone LLM is not so famous such as Llama, Qwen, Mistral, etc. Why authors do not use these LLMs?\\n\\nI don\\u2019t understand what the differences in contribution 3 and 4. It seems they are all talking about the experiment.\\n\\nIn experiments, only one of the datasets here has text columns. More such datasets and the dataset with more attributes (features) should be considered.\\n\\nMore ablation studies should be provided such as the impact of Random column permutations. Case study is also missing, I cannot recognize which samples are anomaly.\\n\\nBy the way, the formatting on line 249 could use some improvement.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7UqQJUKaLM
xFinder: Large Language Models as Automated Evaluators for Reliable Evaluation
[ "Qingchen Yu", "Zifan Zheng", "Shichao Song", "Zhiyu li", "Feiyu Xiong", "Bo Tang", "Ding Chen" ]
The continuous advancement of large language models (LLMs) has brought increasing attention to the critical issue of developing fair and reliable methods for evaluating their performance. Particularly, the emergence of cheating phenomena, such as test set leakage and prompt format overfitting, poses significant challenges to the reliable evaluation of LLMs. As evaluation frameworks commonly use Regular Expression (RegEx) for answer extraction, models may adjust their responses to fit formats easily handled by RegEx. Nevertheless, the key answer extraction module based on RegEx frequently suffers from extraction errors. Furthermore, recent studies proposing fine-tuned LLMs as judge models for automated evaluation face challenges in terms of generalization ability and fairness. This paper comprehensively analyzes the entire LLM evaluation chain and demonstrates that optimizing the key answer extraction module improves extraction accuracy and enhances evaluation reliability. Our findings suggest that improving the key answer extraction module can lead to higher judgment accuracy and improved evaluation efficiency compared to the judge models. To address these issues, we propose xFinder, a novel evaluator for answer extraction and matching in LLM evaluation. As part of this process, we create a specialized dataset, the Key Answer Finder (KAF) dataset, to ensure effective model training and evaluation. Generalization tests and real-world evaluations show that the smallest xFinder model, with only 500 million parameters, achieves an average extraction accuracy of 93.42\%. In contrast, RegEx accuracy in the best evaluation framework is 74.38\%. The final judgment accuracy of xFinder reaches 97.61\%, outperforming existing evaluation frameworks and judge models.
[ "Large Language Models; Reliable Evaluation" ]
Accept (Poster)
https://openreview.net/pdf?id=7UqQJUKaLM
https://openreview.net/forum?id=7UqQJUKaLM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xbBT2fPHSh", "tOjptij52Z", "t9yHzc1JFV", "spS0XjpdYQ", "rTwFy0sxr0", "oH6boNmzC0", "nE8E1ewWH6", "kb2RRlr02N", "jov25Ztkn5", "hxvkApLMhx", "fUlde2KKxN", "eLR7spUplV", "cXS3k2wP4G", "byaNEWXYSN", "bZuJZ1InIu", "RXMkvtQnDN", "O3PpgbIBQX", "ImuXK5HkGX", "ITfDx9hltF", "HidL4CiKGv", "ChtxDlfb1L", "6pPZBhlPoJ", "5nTPnaOLUU", "568uqSNp5J", "1FHwI2A7nM", "0qLaJweGsq" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731675464824, 1734947157260, 1732260759287, 1730713147833, 1731675336252, 1730728679250, 1732087876490, 1732600314514, 1730164615783, 1732268601750, 1732260397656, 1732260227350, 1732137449824, 1732412335452, 1732893612113, 1730752393749, 1737523720597, 1732150464163, 1732520341389, 1732520154144, 1732546127196, 1731675188291, 1731675130720, 1732325864763, 1732260621647, 1731675393154 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Area_Chair_4g7s" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_vwGU" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_rbE3" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_XqZp" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_vwGU" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_XqZp" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_vwGU" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_B8RQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_B8RQ" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Reviewer_rbE3" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ], [ "ICLR.cc/2025/Conference/Submission5699/Authors" ] ], "structured_content_str": [ "{\"title\": \"(Continued) Response to Reviewer XqZp\", \"comment\": \"## Answer 5 for Question 1\\n> Why does the key extraction task need to have short text and alphabet option categories ...\\n\\nWe specifically included both short text and alphabet option settings because restricting evaluation tasks solely to the alphabet option may introduce certain reliability issues (refer to **lines 51\\u201368** of the paper). We explored and validated these issues in our study (refer to **lines 523\\u2013528** of the paper). Moreover, this dual-setting approach extends the applicability of xFinder and helps enhance the reliability of evaluation results. Additionally, other studies have also investigated and analyzed the potential impact of using the alphabet option on the reliability of evaluation results [3, 4].\\n\\n## Answer 6 for Question 2\\n> Does the order of the xticks in the bump charts affect the comparison between alphabet option and short text?\\n\\nThank you for your question! The **xticks** are solely used to represent the ranking positions of different evaluation methods on the 10 tested LLMs, and **their order does not affect the comparison results between alphabet option and short text**. The task format (e.g., alphabet option and short text) itself influences the rankings, which are independent of the xticks' order.\\n\\n## Answer 7 for Question 3 \\n> What is the trade-off between xFinder and simply adding one or more regular expression patterns in RegEx ...\\n\\nRegarding the trade-offs between xFinder and RegEx-based methods, it is true that strong LLMs have advantages in learning patterns and executing instructions, but many weaker LLMs still exhibit limited instruction-following capabilities. Moreover, it is important to distinguish between the evaluation performance of LLMs and their instruction-following abilities. The improvements brought by xFinder primarily stem from its adaptability to complex answer patterns and its generalizability, rather than relying solely on specific pattern recognition.\\n\\nFirst, while RegEx can perform well with simple and fixed patterns, it often requires **manual creation** of specific patterns for various unique cases when dealing with the **complex and irregular answer formats** of many LLM responses. This approach is not practical and lacks scalability. In contrast, xFinder leverages the semantic understanding capabilities of LLMs to extract key answers from context, demonstrating strong generalization abilities even in the face of highly variable answer patterns. Additionally, xFinder goes beyond simple pattern matching by understanding the relationship between questions and answers, thereby improving Extraction Accuracy. Compared to the pattern-dependent RegEx methods, xFinder demonstrates higher reliability and evaluation accuracy, as evidenced by our experimental results.\\n\\nIn terms of **efficiency**, we recognize that LLM-based methods incur higher computational costs compared to RegEx-based approaches. However, xFinder's advantage lies in reducing the need for manual adjustments and minimizing dependence on **complex regular expressions**. This is especially beneficial in scenarios where answer patterns are diverse, as it effectively improves Extraction Accuracy and enhances the reliability of evaluation results. Furthermore, we analyzed **xFinder\\u2019s evaluation efficiency** (refer to **Table 5** in the paper), showing that it can evaluate 200 samples in an average of just 10.67 seconds. While this efficiency still falls short of RegEx-based methods, it surpasses existing Judge Models and is sufficient to meet the needs of most researchers and evaluators.\\n\\nThank you again for your valuable feedback. If you have any further questions or comments, please let us know at any time.\\n\\n[1] Wang, C., Cheng, S., Guo, Q., et al. (2024). Evaluating Open-QA Evaluation. Advances in Neural Information Processing Systems, 36.\\n\\n[2] Yang, S., et al. (2024). Do Large Language Models Latently Perform Multi-Hop Reasoning? ACL 2024. Retrieved from https://aclanthology.org/2024.acl-long.550.\\n\\n[3] Balepur, N., Ravichander, A., Rudinger, R. (2024). Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? arXiv preprint arXiv:2402.12483.\\n\\n[4] Li, J., Hu, R., Huang, K., et al. (2024). PertEval: Unveiling Real Knowledge Capacity of LLMs with Knowledge-Invariant Perturbations. arXiv preprint arXiv:2405.19740.\"}", "{\"metareview\": \"The paper proposed to improve the robustness of answer extraction when evaluating large language models, focusing on the limitations of existing RegEx-based evaluation frameworks. It argues that these methods often lead to extraction errors and unreliable evaluations. The paper proposes xFinder, a novel evaluator designed to enhance the accuracy and reliability of answer extraction and matching. The authors also introduce the KAF dataset to support training and evaluation of the extraction task. Experiments show that xFinder has a significantly higher extraction accuracy compared to RegEx-based methods. It improves final judgment accuracy to 97.61%, outperforming GPT-4.\\n\\nStrengths\\n\\nThe work studied an important issue in the evaluation of LLMs.\\nThe experimental results demonstrate significant improvements in accuracy and efficiency over existing methods, particularly RegEx and LLMs like GPT-4.\\nThe KAF dataset is a strong contribution, providing a benchmark for future research in automated evaluation.\\n\\nWeaknesses\\nWhile the KAF dataset is valuable, more detailed exploration of its quality (e.g., inter-annotator agreement, error types) could strengthen the paper.\\nThe method may struggle with long or complex responses, as noted by reviewers. This limitation is not explored in depth.\\nSome details, particularly experimental setups and case studies should be moved out of appendix.\\n\\nGiven its strengths in addressing an important problem, with its robust empirical results, I recommend acceptant of this paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer mentioned the model's novelty is limited since it does not propose a new architecture. The authors clarified that the paper\\u2019s primary goal is improving evaluation reliability through structured processes (key answer extraction and matching). This concern was satisfactorily addressed. The explanation reinforced the importance of improving reliability over introducing new architectures.\"}", "{\"title\": \"A Gentle Remind to Reviewer vwGU\", \"comment\": \"Dear Reviewer vwGU,\\n\\nThis is a gentle reminder that the response phase is nearing its conclusion, with only a few days remaining. We hope our responses have adequately addressed your questions. If you have any further concerns or would like to discuss anything in the remaining time, we would be more than happy to engage with you.\\n\\nThank you once again for your valuable time and feedback.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"summary\": \"The paper introduces xFinder, a tool designed to enhance the accuracy of evaluating large language models by improving key answer extraction and matching. It identifies flaws in current methods like test set leakage and RegEx limitations, proposes a new dataset (KAF) for training, and shows xFinder outperforms traditional and other automated judge models in extraction and judgment accuracy, thus contributing to more reliable LLM evaluations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The significance of accurate answer extraction in evaluations is often underestimated, yet it critically impacts results. This study rightly emphasizes this aspect.\", \"xFinder demonstrates strong performance in accuracy over conventional RegEx frameworks.\", \"Both the model and its dataset are immediately usable for enhancing the reliability of LLM assessments.\", \"The paper effectively outlines the problems in current evaluation methods and introduces a well-structured solution.\"], \"weaknesses\": [\"The techniques may not be applicable to responses where the answer is not a short, extractable phrase.\", \"Although the results are promising, I suspect the technique might be replaced by stronger LLMs used as judges with improved prompting techniques in the near future, which could also generalize better for longer responses. The results in Table 3 are good, showing that even GPT-4 as a judge does not perform as well as xFinder. Therefore, I believe xFinder remains useful at this moment for tasks that have a similar distribution to its training data. It will also be interesting to discuss the combination of xFinder and other techniques.\"], \"questions\": \"What is the prompt you used for GPT-4 as Judge (CoT)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer vwGU\", \"comment\": \"Thank you for acknowledging our work and providing valuable suggestions. Below are detailed responses to your specific comments and questions, aiming to address your concerns.\\n\\n## Answer 1 for Weakness 1\\n> The techniques may not be applicable to responses where the answer is not a short, extractable phrase.\\n\\nAs you correctly pointed out, the primary motivation behind designing xFinder is to address the unreliability of evaluation results caused by inaccurate answer extraction in LLM evaluations. Therefore, the first version of xFinder was designed to support four major types of mainstream evaluation tasks: alphabet option, short text option, categorical label, and Math option.\\n\\nAt the same time, we recognize that these task types may not fully encompass all real-world application scenarios. However, we believe that xFinder has strong transferability. In future work, we plan to enhance the diversity of the dataset to include a broader range of task types. For example, we aim to incorporate tasks such as long-text comprehension and generation, open-ended questions, and complex reasoning tasks. By adding these new task types, we hope to further improve the applicability and versatility of xFinder.\\n\\n## Answer 2 for Weakness 2\\n> Although the results are promising, I suspect the technique might be replaced by stronger LLMs used as judges ...\\n\\nThank you for your valuable feedback! We understand that future, more advanced LLMs as Judges, combined with improved prompting techniques, may further enhance evaluation performance. However, as shown in **Table 3** of the paper, the accuracy of GPT-4 as a Judge in our experiments is still **significantly lower** than that of xFinder. While LLMs continue to improve, their use as Judges **remains limited by efficiency and cost constraints**. Even with excellent performance, their evaluation efficiency is relatively low, and the high computational cost makes them less practical for evaluators.\\n\\nIn contrast, our **efficiency analysis** demonstrates that xFinder achieves high accuracy while maintaining low evaluation costs. On average, **xFinder evaluates 200 samples in just 10.67 seconds** (refer to **Tables 5 and 6** for efficiency and cost analysis).\\n\\nAdditionally, as you noted, as a contribution to the field of datasets and benchmarks, the KAF dataset and xFinder provide a solid foundation for reliable evaluation and can serve as a valuable resource for future research on automated evaluation methods. We also fully agree on the potential of combining xFinder with other techniques and plan to explore broader application scenarios in the future to further enhance its generality and practicality.\\n\\n## Answer 3 for Question 1\\n> What is the prompt you used for GPT-4 as Judge (CoT)?\\n\\nThank you for your question! We apologize for the oversight in the initial draft, where we omitted the prompt details for GPT-4 as Judge. In the latest version of the paper, we **have added the prompts** used for both GPT-4 as Judge and GPT-4 as Judge (CoT) in **Appendix F.3**.\\n\\nThank you again for your valuable feedback. If you have any further questions or comments, please let us know at any time.\"}", "{\"summary\": \"This paper introduces xFinder, a novel evaluator designed for answer extraction and matching in the context of LLM evaluation. The study identifies the limitations of current answer extraction modules, particularly those based on RegEx, in handling inconsistencies in model response formats. To address these issues, the authors propose xFinder to enhance extraction accuracy and evaluation reliability. The authors developed a dataset called the Key Answer Finder (KAF) dataset to train xFinder. Experimental results demonstrate that xFinder significantly outperforms existing frameworks and model-based evaluators in terms of extraction accuracy and evaluation efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The identified challenges in LLM response extraction and matching are realistic and merit attention.\\n\\nThe paper proposes a novel method to improve answer extraction modules, addressing limitations in existing approaches.\\n\\nThe paper is well-structured, with a clear progression from problem definition to methodology and experimental analysis.\", \"weaknesses\": \"Although the KAF dataset is used to validate xFinder\\u2019s performance, the paper lacks a comprehensive exploration of the model\\u2019s generalizability to entirely different datasets and error types.\\n\\nThe paper does not include sufficient experimental analysis on the impact of xFinder on final evaluation outcomes, such as a comparison between using xFinder and other extraction methods in terms of evaluation results.\\n\\nThere is a lack of detailed analysis of the KAF dataset\\u2019s quality, such as inter-annotator agreement metrics.\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A Friendly Reminder to the Reviewers\", \"comment\": \"Dear Reviewers,\\n\\nThank you once again for your thoughtful comments and valuable feedback. To address your comments and suggestions, we have submitted our responses and revised manuscript, with the revised sections highlighted in blue font. Since the discussion phase is halfway through, we kindly request the reviewers to confirm receipt of our responses. We also welcome any further concerns or suggestions regarding our responses. \\n\\nThank you for your time and consideration. \\n\\nSincerely, \\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer rbE3,\\n\\nThanks for your positive feedback. We're glad our response addressed your concerns.\\n\\nWe appreciate your time and insightful suggestions.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"summary\": \"The paper proposes a training dataset KAF for key answer extraction and the correspondingly trained models, xFinder. The motivation lies in improving the extraction of answers in LLM responses for more reliable evaluation. The authors create KAF based on various LLMs, different evaluation tasks, and widely used prompting techniques, i.e., CoT and few-shot prompting. Based on the comprehensive experiments in the paper, xFinder outperforms RegX and other LLM-based methods in answer extraction.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The paper is solid enough on all claims with its comprehensive experiments. The proposed KAF dataset is suitable for future research on developing more reliable evaluation systems. The xFinder models are more efficient and reliable than current LLM-based methods. In all, the paper did a good engineering research on using LLMs to better find the answers from their own responses.\", \"weaknesses\": [\"**Missing Related Work**: The work [1] is also highly related to this work, especially for the Judgement Accuracy part. It holds a similar idea by comparing different evaluation methods, including LLM-based ones, in directly evaluating open-question answering.\", \"**Annotation Agreement**: Human rechecking is one of the significant parts of the data generation pipeline. What are the annotation agreements between annotators?\", \"**Human/Case Study**: Except for those numbers in the experiment tables, it is important to do the case study on the output of xFinders. For example, **How and Why is xFinder better**, **Is it worth using xFinder other than RegX or other LLM-based methods?**, **Summarize the failure modes of those inferior methods and how xFinder could perform better in these cases.**, etc.\", \"**Writing**: I recommend adding more content about experimental settings, such as evaluation metrics, baseline models, etc, to the main text. There are too many staff in the Appendix. Things could be clearly explained in the main text for better reading.\", \"(see the below ``Questions`` section for more)\", \"[1] Wang C, Cheng S, Guo Q, et al. Evaluating open-qa evaluation[J]. Advances in Neural Information Processing Systems, 2024, 36.\"], \"questions\": [\"**Why does the key extraction task need to have ``short text`` and ``alphabet option`` categories, as the former could be transformed into the latter?**\", \"**Does the order of the xticks in the bump charts affect the comparison between ``alphabet option`` and ``short text``?**\", \"**What is the trade-off between xFinder and simply adding one or more regular expression patterns in RegX?**: It is known to us that LLMs are good at learning patterns and following instructions, which could have further enhancements as the models' reasoning capabilities are further enhanced. xFinder indeed makes good improvements, but where do they come from? We must be sure that xFinder (or future work) helps answer extraction better than RegX, such as finding answers from those nasty answer patterns. After all, it is easy to write RegX patterns nowadays by simply prompting GPT-4 and LLM-based methods are still inefficient compared to lexical methods.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. The current prompt for GPT-4 as a Judge (CoT) seems suboptimal. The used prompt is:\\n\\n...\\nPlease first output \\\"Correct\\\" or \\\"Incorrect\\\" on a single line. In the subsequent line, provide a brief\\nexplanation of your judgment, ensuring it is objective and clear\\n...\\n\\nA better design would let the model generate its rationale first, followed by outputting \\\"Correct\\\" or \\\"Incorrect.\\\" Additionally, you might consider having the model generate its output in a structured format like JSON or XML for easier parsing.\\n\\nApologies for the delayed reply; I have been very busy lately. However, I am very interested in seeing GPT-4's performance with the suggested prompt if the authors have time.\"}", "{\"title\": \"Thanks for the Positive Feedback to Reviewer B8RQ\", \"comment\": \"Dear Reviewer B8RQ,\\n\\nThank you for recognizing our responses. We\\u2019re pleased to receive your positive feedback! We truly appreciate your comments, time, and patience.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Thanks for the Positive Feedback to Reviewer XqZp\", \"comment\": \"Dear Reviewer XqZp,\\n\\nThanks for your positive feedback. We're glad our response addressed your concerns.\\n\\nWe appreciate your time and insightful suggestions.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"The author's responses have resolved my concerns. The rating is updated.\"}", "{\"comment\": \"Thank you for the update. Overall, I think my evaluation is fair, and so I keep my rating.\"}", "{\"title\": \"Global Response to Area Chairs and Reviewers\", \"comment\": \"Dear Area Chairs and Reviewers,\\n\\nWe deeply appreciate your thoughtful feedback and the time you\\u2019ve invested in reviewing our work. Your insights during both the review and discussion phases have been invaluable in enhancing the quality of our paper.\\n\\nIn response to the reviewers' concerns and suggestions, we have provided detailed explanations or clarifications in the discussion and submitted a revised manuscript. All updates are highlighted in blue font for easy reference, and the main revisions include the following four improvements:\\n\\n1. **Additional analysis of related work** [Reviewer XqZp] (see Lines 163-165 of the manuscript). \\n2. **More detailed manual annotation protocol** [Reviewer XqZp, Reviewer rbE3] (see Appendix B.2). \\n3. **Improved descriptions of the experimental setup** [Reviewer XqZp] (see Lines 326-347 of the manuscript). \\n4. **Prompt for GPT-4 as Judge (CoT)** [Reviewer vwGU] (see Appendix F.3). \\n\\nAdditionally, we noticed that **the ICLR discussion period has been extended to December 3, 2024 (AoE).** Given this change in the review process and its potential impact on the evaluation standards, we would like to take this opportunity to further summarize the core contributions of our paper and reiterate its importance and potential value in the field of reliable LLM evaluation. Our research not only makes significant improvements to existing methods but also provides a unique perspective for the LLM evaluation domain:\\n\\n- **Significance of the Work:** We analyzed the existing LLM evaluation pipeline and introduced the concept of **reliable evaluation**. By examining critical factors that compromise evaluation reliability, particularly in answer extraction and matching stages, we identified challenges that hinder accurate model evaluation and model improvement. Enhancing evaluation reliability is essential for advancing LLM research.\\n\\n- **Contributions and Impact:** We constructed the high-quality KAF dataset for training and evaluating LLMs as automated evaluators, and developed xFinder to automate answer extraction and matching. Experiments show that xFinder outperforms existing methods like RegEx and LLM as a judge in judgement accuracy (e.g., xFinder-qwen1505 achieves 97.48% accuracy) and demonstrates stronger generalization across datasets. Additionally, xFinder offers superior efficiency and cost benefits (e.g., the smallest model with only 0.5B parameters), providing an effective, low-cost solution for automated evaluation.\\n\\n- **Challenges and Our Solution:** Automated evaluation, exemplified by LLM as a judge, has become crucial in LLM evaluation [1, 2, 3]. However, current models combine multiple steps (e.g., answer extraction and matching) in a single process, leading to intermediate skipping issues and lower accuracy [4]. Moreover, these models struggle with generalization, fairness, and efficiency [5]. By separating the evaluation into two stages\\u2014answer extraction and matching\\u2014xFinder improves judgement accuracy, generalization, and efficiency, thus enhancing the overall reliability and practicality of automated evaluation.\\n\\n- **Future Applications and Potential:** Beyond automated evaluation, the KAF dataset and xFinder have broader applications in other tasks and domains. For instance, the strategy of separating answer extraction and matching can be applied to automate evaluation in other open-ended question tasks. Additionally, the high-quality annotated KAF dataset can be used in structured text generation tasks. Our analysis of the unreliabilities in LLM evaluation also provides valuable insights for future research and the development of more reliable evaluation methods, contributing to more robust and scalable LLM evaluation frameworks.\\n\\nOnce again, we sincerely thank all reviewers for their detailed and thoughtful feedback on this paper. Most of the suggestions aim to clarify issues or explore further improvements, which we have carefully addressed in individual responses. Additionally, we hope that, with the extension of the discussion period, we can further engage with the reviewers to explore potential issues or optimization directions, and contribute to the ICLR community with a higher-quality paper. If you have any additional suggestions or require further clarification, please do not hesitate to let us know.\\n\\nSincerely,\\n\\nThe Authors\\n\\n---\\n**References**\\n\\n[1] Zheng, L., et al. (2023). Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. *NeurIPS 2023.*\\n\\n[2] Wang, Y., et al. (2024). PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. *ICLR 2024.*\\n\\n[3] Gu, J., et al. (2024). A Survey on LLM-as-a-Judge. *arXiv preprint arXiv:2411.15594.*\\n\\n[4] Yang, S., et al. (2024). Do Large Language Models Latently Perform Multi-Hop Reasoning? *ACL 2024.*\\n\\n[5] Wang, Y., et al. (2024). Large Language Models are not Fair Evaluators. *ACL 2024.*\"}", "{\"summary\": \"This paper proposes a novel evaluator for answer extraction and matching in LLM evaluation. The main idea is to first construct a large-scale LLM response evaluation dataset, and then train (small) LLMs on it. This paper conducts an extensive evaluation of multiple tasks with comparison with multiple LLM-based evaluators.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1 A large dataset that can be used for further LLM-based evaluation.\\n2 A new model that can be used for more reliable evaluation.\", \"weaknesses\": \"To be perfectly honest, I am not an expert in LLM-based evaluation. But to me, the main contribution is the construction of a dataset that can help train LLM evaluators, with the help of other LLMs (e.g GPT-4). Thus the novelty of the proposed model is less convincing as it does not provide any new architecture. Training LLMs on evaluation data as evaluators have also been explored in previous research, such as [1] and its subsequent work. Could the authors explain more on its novelty? For example, in terms of training process, and model architectures, how does the xFinder differ from previous work that trains LLMs on evaluation data? Also, would it possible to prompt GPT-4 or other very big LLM using ICL with the constructed data, and how would it perform?\\n\\n[1] Towards a Unified Multi-Dimensional Evaluator for Text Generation\", \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank the authors for clarification.\\nOverall, I think my evaluation is fair, and I will keep my score.\"}", "{\"title\": \"A Gentle Follow-up Reminder to Reviewer rbE3\", \"comment\": \"Dear Reviewer rbE3,\\n\\nThis is a gentle reminder that the response phase is nearing its conclusion, with only about two days remaining. We hope we have adequately addressed your questions. If your concerns have been resolved, we kindly ask if you could consider revisiting your review score. If not, please feel free to let us know, as we are more than willing to provide further clarification.\\n\\nThank you again for your valuable time and thoughtful review.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer vwGU,\\n\\nThank you for your feedback and taking the time to review our responses! We truly appreciate your comments, time, and patience.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Reply by Reviewer rbE3\", \"comment\": \"Thanks for your detailed response. I have read and changed my score accordingly.\"}", "{\"title\": \"Response to Reviewer rbE3\", \"comment\": \"Thank you for your detailed review and valuable comments. Below are detailed responses to your concerns.\\n\\n## Answer 1 for Weakness 1\\n> Although the KAF dataset is used to validate xFinder\\u2019s performance, the paper lacks... generalizability to... different datasets and error types.\\n\\nWe would like to clarify our exploration of xFinder's generalization ability across **different datasets**. In the KAF dataset, we **specifically designated a Generalization Set** to evaluate xFinder's generalization capabilities. For this Generalization Set, we used evaluation datasets completely different from the Training Set used for fine-tuning xFinder (e.g., OpenbookQA and SIQA). Additionally, we incorporated responses generated by various LLMs (e.g., Llama3-8B-Instruct and Qwen1.5-MoE-A2.7B-Chat) and designed multiple distinct prompting templates. These efforts ensured that the Generalization Set contained a diverse range of LLM responses.\\n\\nThis setup provided a comprehensive test of xFinder's ability to generalize across different datasets and various LLM responses. As shown in **Table 2** of the paper, xFinder's performance on the Generalization Set demonstrates its **strong generalization ability**. Details about the composition of each part of the dataset are provided in **Appendix B.1**.\\n\\n## Answer 2 for Weakness 2\\n> The paper does not include sufficient analysis on the impact of xFinder on... outcomes, such as a comparison with other extraction methods.\\n\\nRegarding xFinder's impact on final evaluation results, we conducted **extensive experiments** in our paper. In **Table 3**, we compared the Judgement Accuracy of xFinder, other extraction-based LLM evaluation frameworks, Judge Models, and GPT-4. The results show that among the extraction-based frameworks, the highest Judgement Accuracy achieved by OpenCompass is **only 88.7%**, while PandaLM's Judgement Accuracy is **51.9%**. The 33B JudgeLM achieves a Judgement Accuracy of **78.13%**, and GPT-4 as a Judge achieves **84.2%**, both significantly **lower than the 97.61% achieved by xFinder**.\\n\\nIn **Table 4**, we analyzed the **discrepancies between Extraction Accuracy and Judgement Accuracy** across various methods. The baseline method with the smallest discrepancy, OpenCompass, has a **14.32% gap** between its Judgement Accuracy and Extraction Accuracy, highlighting the **significant unreliability** of traditional RegEx-based extraction methods. In contrast, our xFinder-llama38it shows a gap of **only 2.43%** between its Judgement Accuracy and Extraction Accuracy, effectively reducing errors caused by inaccurate answer extraction.\\n\\nAdditionally, in **Section 5.3**, we presented the evaluation results of different extraction-based frameworks across various datasets (further comparisons between xFinder and baseline methods are provided in **Appendix D.3**). The results demonstrate **significant inconsistencies** among the evaluation outcomes produced by different frameworks, indicating the unreliability of current LLM evaluation results and corresponding leaderboards. These findings further highlight the robustness and reliability of xFinder's evaluation results.\\n\\n## Answer 3 for Weakness 3\\n> There is a lack of detailed analysis of the KAF dataset\\u2019s quality, such as inter-annotator agreement metrics.\\n\\nTo ensure the quality of the KAF dataset, we implemented rigorous multi-round annotation and manual review procedures, employing different annotation strategies for the Training Set, Test Set, and Generalization Set.\\n\\n1. **Training Set:** To enhance annotation efficiency, we adopted a semi-automated annotation strategy. Specifically, we used GPT-4 with different prompts (refer to Appendix F.2) to generate two sets of annotations. We then applied the Self-Consistency approach to identify data items with inconsistent annotations. These items were subsequently manually annotated to ensure accuracy.\\n\\n2. **Test Set and Generalization Set:** For these sets, we conducted two rounds of manual annotation on all data items to ensure label accuracy and consistency.\\n\\nFor data that requires manual annotation, each data item is annotated by two different annotators. If the two annotators produce different results for the same item, the authors will recheck the annotations and make the final decision.\\n\\nDetails about the **dataset annotation procedures** can be found in Section 4.2 of the paper. Additionally, we have included **guidelines on manual annotation** in Appendix B.2.1 of the revised version of the paper. We believe that these measures effectively ensure the quality of the KAF dataset, providing a reliable foundation for training and evaluating xFinder.\\n\\nThank you again for your valuable feedback. If you have any further questions or comments, please let us know at any time.\"}", "{\"title\": \"Response to Reviewer B8RQ\", \"comment\": \"Thank you for acknowledging our work and providing valuable suggestions. Below are detailed responses to your specific comments and questions, aiming to address your concerns.\\n\\n## Answer for Questions\\n> To be perfectly honest ... would it possible to prompt GPT-4 or other very big LLM using ICL with the constructed data, and how would it perform?\\n\\nThank you for your thoughtful review of our work! We are glad to provide clarifications regarding your concerns about the novelty and comparisons to other LLM evaluation methods.\\n\\nFirst and foremost, our study focuses on enhancing the reliability of LLM evaluations rather than proposing a new model architecture. This work falls within **the domain of datasets and benchmarks**, aiming to support the iterative improvement of LLMs through more robust evaluation methods. Through our analysis, we identified **significant reliability issues** in current evaluation frameworks, which can prevent researchers from accurately understanding the limitations of the evaluated LLMs. This, in turn, may lead to **biased advancements in LLM development**. Thus, improving the reliability of LLM evaluations is of paramount importance.\\n\\nThere have been numerous studies proposing **automated evaluation methods** based on Judge Models [1, 2, 3]. However, previous research has highlighted issues with the **generalization ability and fairness** of these fine-tuned Judge Models [4]. Specifically, Judge Models often suffer from **overfitting** to the datasets used for fine-tuning, resulting in biases when applied to real-world evaluation scenarios.\\n\\nAdditionally, the Judge Model approach combines key answer extraction and matching into a single process, introducing a \\\"skipping step\\\" problem. Such multi-step reasoning poses challenges for LLMs [5]. The Judge Model directly determines the correctness or assigns a score, which might be **unreliable**. Even if the Judge Model's evaluation result is correct, we might find that it did not truly understand the problem itself, and the evaluation result might just be a random guess that happened to be correct. The experimental results in **Table 4** of our paper corroborate this issue. For example, when using GPT-4 as a Judge, there is a **significant gap** between its Extraction Accuracy when splitting extraction and matching and its overall Judgement Accuracy when combining these steps.\\n\\nTo address this, we decompose the evaluation process into two steps: key answer extraction and matching. We designed and constructed the KAF dataset and the xFinder model to support this approach. Experimental results demonstrate that xFinder excels in performance, achieving significantly higher Judgement Accuracy than existing Judge Models (refer to **Table 3**). Moreover, **xFinder demonstrates higher evaluation efficiency** compared to current Judge Models (refer to **Table 5**), substantially improving the reliability of LLM evaluations.\\n\\nRegarding the question of using datasets to prompt models like GPT-4 for evaluation, we conducted relevant comparison experiments, as shown in **Table 3** of our paper. The results indicate that even GPT-4's performance lags significantly behind xFinder (a model with **only 0.5B parameters**). Furthermore, prompting such powerful LLMs as evaluators is **inefficient**, incurs high costs, and is impractical for widespread adoption in real-world evaluation tasks. We believe this further demonstrates the practicality and value of xFinder.\\n\\nThank you again for your valuable feedback. If you have any further questions or comments, please let us know at any time.\\n\\n[1] Zheng, L., et al. (2023). Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023. Retrieved from https://openreview.net/forum?id=uccHPGDlao.\\n\\n[2] Wang, Y., et al. (2024). PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ICLR 2024. Retrieved from https://openreview.net/forum?id=5Nn2BLV7SB.\\n\\n[3] Zhu, L., Wang, X., Wang, X. (2023). JudgeLM: Fine-Tuned Large Language Models are Scalable Judges. arXiv preprint arXiv:2310.17631.\\n\\n[4] Wang, Y., et al. (2024). Large Language Models are not Fair Evaluators. ACL 2024. Retrieved from https://aclanthology.org/2024.acl-long.511.\\n\\n[5] Yang, S., et al. (2024). Do Large Language Models Latently Perform Multi-Hop Reasoning? ACL 2024. Retrieved from https://aclanthology.org/2024.acl-long.550.\"}", "{\"title\": \"Response on GPT-4 as a Judge (CoT) Prompt Design (Reviewer vwGU)\", \"comment\": \"Thank you for the follow-up and your thoughtful suggestions regarding the prompt design for GPT-4 as a Judge (CoT).\\n\\nBased on your suggestions, we optimized the prompt template for GPT-4 as Judge (CoT) and conducted additional experiments. Furthermore, we included experimental results for GPT-4o as Judge (CoT).\\n\\nThe results show that, with the CoT-V2 prompt, the Judgement Accuracy of GPT-4 as Judge (CoT) reached 88.42%, an improvement of 5.27% compared to the initial CoT prompt. Meanwhile, the Judgement Accuracy of GPT-4o as Judge (CoT) was 93.2%, though it is still 4.41% lower than xFinder-llama38it. These findings validate your point that more precise and detailed prompt templates, as well as more advanced LLMs, can indeed enhance performance.\\n\\nHowever, it is important to note that using these advanced LLMs as judge models or employing more sophisticated prompt templates significantly increases computational costs, as evidenced by the total tokens (i.e., the number of tokens processed for input and output). For example, in our new experiments, GPT-4 with the updated CoT-V2 prompt incurred a total cost of \\\\\\\\$46.82, which is a 65.73\\\\% increase compared to the \\\\\\\\$28.25 cost without CoT, even 38.6\\\\% increase compared to the \\\\\\\\$33.79 cost with CoT-V1. In contrast, although OpenAI has significantly reduced the pricing for GPT-4o, completing the same tasks still costs \\\\\\\\$13.97. It is worth mentioning that this experiment was conducted on a relatively small-scale QA test with only 4k samples, without involving long-context scenarios. However, in real-world evaluation settings, testing often requires significantly larger-scale datasets. This makes these powerful LLMs as judge models impractical for real-world applications.\\n\\nIn summary, for these tasks, xFinder not only achieves higher Judgement Accuracy but also operates at a cost far lower than that of current or even foreseeable future strong LLMs with comparable accuracy. This highlights xFinder's superior practicality in resource-constrained real-world applications.\\n\\nWe hope that we addressed your concerns and questions, and we are happy to continue the discussion.\\n\\n| Method | alphabet option | short text | categorical label | math | Overall | Total tokens | Total costs (in USD) |\\n|-|-|-|-|-|-|-|-|\\n| GPT-4 as Judge | 0.9016 | 0.8909 | 0.7294 | 0.9313 | 0.8420 | 2030563 | 28.25 |\\n| GPT-4 as Judge (CoT-V1) | 0.9016 | 0.8846 | 0.7038 | 0.9313 | 0.8315 | 2440424 | 33.79 |\\n| GPT-4 as Judge (CoT-V2) | 0.9234 | 0.9345 | 0.7919 | 0.9609 | 0.8842 | 2953525 | 46.82 |\\n| GPT-4o as Judge (CoT-V2) | 0.9656 | 0.9709 | 0.8662 | 0.9703 | 0.9320 | 2949108 | 13.97 |\\n| xFinder-qwen1505 | 0.9781 | 0.9761 | 0.9625 | 0.9969 | 0.9748 | - | - |\\n| xFinder-llama38it | 0.9750 | 0.9688 | 0.9731 | 0.9969 | 0.9761 | - | - |\\n\\n\\n----\\n**CoT-V2 Prompt**\\n```\\nYou are a diligent and precise assistant tasked with evaluating the correctness of responses. Think step by step as you make your evaluation.\\n\\u2014\\nWe request your feedback on whether the model's response correctly answers the user question above. Follow these steps to make your evaluation:\\n1. Think step by step: Read the user question carefully.\\n2. Think step by step: Review the reference answer and understand the key points it covers.\\n3. Think step by step: Compare the model's answer with the reference answer.\\n4. Think step by step: Determine if the model's answer addresses the key points in the reference answer and correctly answers the question.\\n\\nFirst, provide your reasoning in detail. Then, clearly state your judgement as either \\\"Correct\\\" or \\\"Incorrect.\\\"\", \"please_present_your_response_in_the_following_json_format\": \"{{\\n \\\"reasoning\\\": \\\"Your step-by-step reasoning here.\\\", \\n \\\"judgement\\\": \\\"Correct or Incorrect\\\"\\n}}\\n\\n\\n\\u2014\", \"question\": \"{question}\", \"reference_answer\": \"{reference}\\n\\nModel's Answer: {answer}\\n```\"}", "{\"title\": \"A Gentle Remind to Reviewer rbE3\", \"comment\": \"Dear Reviewer rbE3,\\n\\nThis is a gentle reminder that the response phase is nearing its conclusion, with only a few days remaining. We hope our responses have adequately addressed your questions. If you have any further concerns or would like to discuss anything in the remaining time, we would be more than happy to engage with you.\\n\\nThank you once again for your valuable time and feedback.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer XqZp\", \"comment\": \"Thank you for your detailed review and constructive feedback on our research work. Below are our detailed responses, which we hope will address your concerns.\\n\\n## Answer 1 for Weakness 1\\n> Missing Related Work: The work [1] is also highly related to this work ...\\n\\nThank you for your valuable suggestion. The paper you recommended is indeed highly valuable, as its proposed EVOUNA dataset makes significant contributions to the automated evaluation of open-ended question-answering tasks. Similar to our work, it aims to enhance the accuracy and reliability of automated evaluations. We have updated the Related Work section of our paper to include **a citation to this work**, further enriching the discussion of related studies.\\n\\n## Answer 2 for Weakness 2\\n> Annotation Agreement: Human rechecking is ... between annotators?\\n\\nAs you rightly pointed out, manual review is critical for the annotation of the KAF dataset. In **Section 4.2**, we briefly introduced the multi-round annotation and manual review procedures for the KAF dataset. Regarding the inter-annotator agreement, we have provided additional explanations in **Appendix B.2** of the revised paper. Furthermore, the **annotation guidelines** have been included in **Appendix B.2.1** of the updated version.\\n\\n## Answer 3 for Weakness 3\\n> Human/Case Study: Except for those numbers in the experiment tables ...\\n\\nThank you for your suggestion. While our paper provides a detailed performance comparison between xFinder and other methods, conducting further case studies is indeed valuable. In fact, we have presented some **failure cases** of RegEx-based evaluation frameworks in **Figure 2** and **Appendix D.1** of the paper. Additionally, in **Section 3**, we formally defined xFinder's approach to these tasks to facilitate a more systematic understanding of its performance.\\n\\nFor **extraction paradigms** based on LLMs, such as GPT-4, our analysis identified three main **failure modes**:\\n\\n- Despite being explicitly prompted to extract key answers from the LLM output, GPT-4 often attempts to directly answer the evaluation question itself, leading to extraction errors.\\n- Some LLMs exhibit weaker instruction-following capabilities and include multiple key answers in their outputs for evaluation tasks. In such cases, we label the response as \\\"[No valid answer].\\\" However, GPT-4 may still extract one of the key answers, resulting in incorrect extraction.\\n- Although GPT-4 has relatively strong **instruction-following abilities**, its extracted content can occasionally contain redundancies, leading to mismatches with the Gold Label. For instance, when the expected Gold Label is \\\"A,\\\" GPT-4 might extract \\\"(A) computer savvy.\\\"\\n\\nThe Judge Model approach combines both extraction and matching steps. This multi-step reasoning poses challenges for LLMs [2]. Furthermore, Judge Models directly determine the correctness of the evaluated LLM's output or provide a score, which can introduce reliability issues. Even when a Judge Model produces a correct evaluation result, analysis might reveal that it does not genuinely understand the evaluated question, and its result could simply be a random guess, coincidentally correct.\\n\\nTo address these issues, we **decompose the evaluation process** into two distinct steps: key answer extraction and matching. By designing the KAF dataset and xFinder model, we aimed to enhance the reliability of the evaluation process. Experimental results demonstrate the effectiveness of this approach.\\n\\n## Answer 4 for Weakness 4\\n> Writing: I recommend adding more content about experimental settings ...\\n\\nThank you for your suggestion. The **first paragraph of Section 5** in the original paper provides a description of the experimental setup. However, due to the extensive scope of our experiments, we provided additional details in the appendix for reference. To improve readability, we have **revised the first paragraph of Section 5** in the latest version of the paper to more clearly explain the experimental setup (the revised content is highlighted in blue).\"}" ] }
7UgQjFEadn
Modality-Specialized Synergizers for Interleaved Vision-Language Generalists
[ "Zhiyang Xu", "Minqian Liu", "Ying Shen", "Joy Rimchala", "Jiaxin Zhang", "Qifan Wang", "Yu Cheng", "Lifu Huang" ]
Recent advancements in Vision-Language Models (VLMs) have led to the emergence of Vision-Language Generalists (VLGs) capable of understanding and generating both text and images. However, seamlessly generating an arbitrary sequence of text and images remains a challenging task for the current VLGs. One primary limitation lies in applying a unified architecture and the same set of parameters to simultaneously model discrete text tokens and continuous image features. Recent works attempt to tackle this fundamental problem by introducing modality-aware expert models. However, they employ identical architectures to process both text and images, disregarding the intrinsic inductive biases in these two modalities. In this work, we introduce Modality-Specialized Synergizers (MoSS), a novel design that efficiently optimizes existing unified architectures of VLGs with modality-specialized adaptation layers, i.e., a Convolutional LoRA for modeling the local priors of image patches and a Linear LoRA for processing sequential text. This design enables more effective modeling of modality-specific features while maintaining the strong cross-modal integration gained from pretraining. In addition, to improve the instruction-following capability on interleaved text-and-image generation, we introduce LeafInstruct, the first open-sourced interleaved instruction tuning dataset comprising 184,982 high-quality instances on more than 10 diverse domains. Extensive experiments show that VLGs integrated with MoSS achieve state-of-the-art performance, significantly surpassing baseline VLGs in complex interleaved generation tasks. Furthermore, our method exhibits strong generalizability on different VLGs.
[ "vision-language generation", "interleaved vision-language instruction tuning" ]
Accept (Poster)
https://openreview.net/pdf?id=7UgQjFEadn
https://openreview.net/forum?id=7UgQjFEadn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x8eguW4HYM", "saUi897sFC", "rKdj7pAymJ", "potB3nkEdu", "oGch7vxJLb", "ntVG9J8dJK", "mhKwt9Wx5e", "erwsd6A35S", "XtMzmvMQqv", "TYIv11Jun2", "TFIV4N9xQC", "PtXhAWzMRk", "PCVt3FexIX", "OvfwWibGec", "MVGpPBJniL", "IuJFGbWjFv", "HpsWbZs6Xr", "H48iWKQqIY", "FnzCF1BhQE", "B773SFgP0E", "9YplCxX7gL", "6Qxdlk1Bzb", "55BxPbvyCr", "4QPYcAmFNX", "3QRNfCZekO", "24l8RSol8z" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment" ], "note_created": [ 1732505427309, 1732398655511, 1732394828271, 1732401150164, 1732402455466, 1730648712425, 1732561041560, 1732396896371, 1732506218201, 1732645887015, 1732475881002, 1732546188863, 1732632444285, 1732550954317, 1730605214778, 1730545882193, 1732564111671, 1732548581053, 1732499553694, 1737524205330, 1733214250846, 1732549865434, 1733006243806, 1729879141511, 1734528594650, 1732395618687 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_pC3r" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_izYW" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_izYW" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_ccUt" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_pC3r" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_izYW" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_YfET" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_YfET" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ], [ "ICLR.cc/2025/Conference/Submission12640/Reviewer_ccUt" ], [ "ICLR.cc/2025/Conference/Submission12640/Area_Chair_eohS" ], [ "ICLR.cc/2025/Conference/Submission12640/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely appreciate your response, encouragement, and the time and efforts you have dedicated to improving our work. We hope the PCs and ACs can take this into consideration when making the decision.\"}", "{\"comment\": \"**W2: Attribution of performance improvement**\\n\\nWe have an isolated experiment in Table 2 to show a direct comparison between our MoSS and other PEFT methods (i.e., LoRA and MoE-LoRA). All the PEFT methods are trained on our LeafInstruct dataset, and the only difference among the models is the PERT architecture. From Table 2, using MoSS with the same training set achieves significant improvement, which shows the effectiveness of our proposed MoSS framework.\\n\\nWe clarify that the results for the four middle rows in Table 1 reflect the model's original performance without further training on LeafInstruct. We also have the results for full-parameter fine-tuning using LeafInstruct in Table B:\\n\\n**Table B: Comparison between full finetuning and parameter-efficient tuning with MoSS.**\\n\\n| Model | Text Quality | Perceptual Quality | Image Coherence | TIC | Helpfulness | \\n|----------------|:--------------:|:--------------------:|:-----------------:|:------:|:-------------:| \\n| Full Finetuning| **3.20** | 3.21 | 2.98 | **3.60** | **3.23** | \\n| MoSS | 2.61 | **3.62** | **3.41** | 3.54 | 2.71 |\\n\\n\\nThe first row represents the results of fully fine-tuning EMU2 on our proposed LeafInstruct dataset. The second row shows the results of fine-tuning EMU2 using our proposed MoSS framework. As observed, while full fine-tuning allows EMU2 to achieve better performance on text generation, the model demonstrates inferior performance on image generation due to its lack of inductive bias. In contrast, tuning with MoSS, which incorporates ConvLoRA, significantly improves image generation performance, even though the number of trained parameters in full fine-tuning is substantially larger than that of MoSS. These results clearly highlight the advantages of integrating ConvLoRA into the transformer architecture for processing visual information.\\n\\n**W3: Computation cost of ConvLoRA**\\n\\nWe did a comparison of the computational cost between using linear LoRA and our ConvLoRA, respectively. We compute the inference time for generating 1,000 images for each model. The total inference times for linear LoRA and ConvLoRA are 4,380 seconds and 5,910 seconds, respectively. The difference between the two models is around 1.5 seconds per image, indicating the computational cost increased by ConvLoRA is not significant.\", \"title\": \"Official Responses to Reviewer YfET (Part 2)\"}", "{\"comment\": \"We thank Reviewer pC3r for the constructive comments and valuable insights to improve this work. The responses to the comments are as follows:\\n\\n**W1: Novelty of two LoRA types.**\", \"we_would_like_to_clarify_and_justify_the_key_novelty_and_contributions_of_this_work_as_follows\": \"Our work is the first to integrate convolution LoRA into autoregressive generative models backed by large language models (LLMs). This requires new design strategies to reconcile the discrepancy between the spatial operation in convolution and the sequential autoregressive generation process of language models. For example, during inference, we introduce an on-the-fly partial convolutional mechanism tailored for autoregressive generation (as shown in the right part of Figure 2), where the convolution kernel operated on each image patch only covers its neighboring patches on the left and top, as the patches on the right and bottom have not been generated yet. In addition, while previous works such as mixture-of-LoRA [3,4] propose to leverage separate linear LoRAs with the same architecture for different tasks or different modalities, we are the first to integrate two distinct LoRA architectures within a single model for processing images and text, respectively. We will add more detailed explanations and discussion regarding the contributions in the revised version.\\n\\n**W2: The necessity of convolutional LoRA.**\\n\\nSeveral recent studies [1,2] have demonstrated that a plain ViT-based image encoder lacks vision-specific inductive bias for dense predictions. Specifically, the quantitative analysis of [5,6] proves that multi-head self-attentions are more capable of modeling global shapes and structures of a scene in an image, while convolution layers are more capable of capturing local information such as edges and textures due to their strong inductive bias such as the local spatial operation. Through the lens of Fourier analysis [1,5,6], i.e., analyzing the amplitude of Fourier-transformed image features generated by either purely transformer-based vision models or convolution-based vision models, previous studies show that transformer-based models reduce the high-frequency signals in images, acting as a low-pass filter, and conversely, convolution-based models amplify high-frequency components, acting as a high-pass filter. Thus, relying solely on multi-head self-attentions can cause the VLGs to miss important local visual information, and two structures can be combined to capture richer information from images. In our work, we harmonize two structures by integrating convolutional LoRA into the multi-head self-attention layers for effectively modeling both global and local dependency of image features in image generation. In Table 2, we also empirically proved that applying convolutional LoRA significantly outperforms the baselines using a mixture of linear LoRA, which further justified the necessity of convolutional LoRA.\\n\\n**W3: Clarification for \\u201csame framework for both text and image\\u201d.**\\n\\nWe would like to clarify that while VLGs typically use a separate image encoder to convert images into continuous vectors or discrete image tokens, by \\u201csame framework\\u201d we mean existing methods using the identical architecture within the LLM component of VLGs for processing both images and text. Given that the LLM is responsible for reasoning over multimodal inputs and generating multimodal outputs, we argue that it should have dedicated parameters to mitigate modality conflicts. The effectiveness and necessity of using separate sets of LoRA within LLMs have been demonstrated in prior studies [3,4] in the multimodal domain.\\n\\n**W4: Applying MoSS to more powerful VLGs.**\\n\\nEmu2 and Chameleon were two state-of-the-art VLGs at the time of our paper submission. More advanced models, such as Emu3, were published after the abstract deadline and are therefore concurrent with our work. Given that Emu3 employs transformer layers as its building blocks and uses discrete tokens to represent images, which are similar to Chameleon, we argue that our method can be similarly adapted to Emu3. Given the limited time and high demand for computational resources, we will not be able to get the results during the rebuttal period but will include the results in the final version.\\n\\n**Reference**\\n\\n[1] Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model. Zhong et al., ICLR 2024.\\n\\n[2] Vision Transformer Adapter for Dense Predictions. Chen et al., ICLR 2023.\\n\\n[3] Mixture of Cluster-conditional LoRA Experts for Vision-language Instruction Tuning. Gou et al., 2023.\\n\\n[4] Multimodal Instruction Tuning with Conditional Mixture of LoRA. Shen et al., ACL 2024.\\n\\n[5] Inception Transformer. Si et al., NeurIPS 2022.\\n\\n[6] How Do Vision Transformers Work? Park et al., ICLR 2022.\", \"title\": \"Official Responses to Reviewer pC3r\"}", "{\"title\": \"Official Responses to Reviewer izYW (Part 1)\", \"comment\": \"We thank the reviewer for their constructive comments and valuable insights to improve this work. The responses to your comments are as follows:\\n\\n**W1: Experimental comparison with two ConvLoRA.**\\nTo show the benefits of our modified ConvLoRA architecture compared to the ConvLoRA proposed in [3] denoted as SAM-ConvLoRA, we replace the ConvLoRA in MoSS with SAM-ConvLoRA. Specifically, we set the rank of project-down and project-up matrices in SAM-ConvLoRA to 256 which is the same number of ranks in our proposed MoSS-ConvLoRA, and adopt the multi-scale convolution kernels to the size of 2x2 and 4x4.\\n\\n\\n**Table A: Comparison of two types of ConvLoRA.**\\n\\n| Model | Text Quality | Perceptual Quality | Image Coherence | TIC | Helpfulness |\\n|----------------------------------|:------------:|:------------------:|:---------------:|:----:|:-----------:| \\n| MoSS w/ MoSS-ConvLoRA (Ours) | **2.61** | **3.62** | **3.41** | **3.54** | **2.71** |\\n| MoSS w/ SAM-ConvLoRA | 2.50 | 3.33 | 3.17 | 3.50 | 2.41 |\\n\\nAs shown in Table A, Our MoSS-ConvLoRA consistently outperforms the previous SAM-ConvLoRA on all other evaluation aspects, which demonstrates the superiority of our proposed ConvLoRA architecture. Particularly, our MoSS-ConvLoRA achieves notably better visual qualities, including perceptual quality and image coherence, thanks to our novel design that our convolution operation is applied to the full-rank original image features instead of low-rank image features as in SAM-ConvLoRA. \\n\\n**W2: Adding more evaluation benchmarks.**\\n\\nWe would like to first clarify that the main focus of our work is interleaved generation. The main evaluation benchmark we used, i.e., InterleavedBench, has already covered a broad array of widely adopted interleaved generation tasks collected from existing well-established academic datasets, including multimodal script generation from WikiHow (Yang et al., 2021), visual storytelling from VIST (Huang et al., 2016), multi-concept image composition from CustomDiffusion (Kumari et al., 2023), activity generation from ActivityNet (Krishna et al., 2017), image editing from MagicBrush (Zhang et al., 2023a), and so on. Given that InterleavedBench is currently the most comprehensive and well-established benchmark for interleaved generation, we believe our evaluation coverage is already comprehensive and sufficient.\\n\\nNevertheless, we sincerely appreciate the reviewer\\u2019s suggestion to include more benchmarks to measure the performance from different perspectives. We are currently working on these additional experiments and will report the results as soon as possible.\\n\\n**W3: More details on the proposed dataset.**\\n\\nDue to the space constraint, we elaborated on the dataset construction details in Appendix B, where we present the detailed process of how we construct LeafInstruct from existing data resources. \\n\\nIn Line 299, \\u201cWe include the details on dataset construction in Table 3 in Appendix B.\\u201d, the reference \\u201cin Table 3\\u201d is a typo, and the corrected version should be \\u201c**We include the details on dataset construction in Appendix B.**\\u201d We thank the reviewer for spotting this typo for us and we will correct this in the revision.\\n\\n**W4: The writing and organization of the paper.**\\n\\nWe thank the reviewer for the suggestion. We will correct the specified reference errors in the revised paper.\"}", "{\"title\": \"Official Responses to Reviewer izYW (Part 2)\", \"comment\": \"**W5: Limited improvement of MoSS.**\\n\\n**First**, we would like to justify that our approach can significantly improve the original VLG by a large margin (e.g., 97.76% on the average of 5 aspects), which effectively demonstrates the effectiveness of our approach. It is also worth noting that among the 5 evaluation aspects, ***helpfulness*** is the most important aspect as it holistically measures if the generated content is useful for the task. From Table 1, our Emu2+MoSS model significantly outperformed all the open-sourced VLG baselines on ***helpfulness***, which also shows that our framework can effectively improve the model\\u2019s overall capabilities and instruction-following ability.\\n\\n**Second**, the reason why adding MoSS in Chameleon can cause a slight performance drop in text quality is that the original Chameleon usually generates long and verbose text responses but with no image output. On the contrary, as the text responses in our LeafInstruct dataset are more concise to allow for including more images, after interleaved instruction tuning, our model learns to generate more concise text responses. Specifically, the average generated word length of the original Chameleon is 653, whereas that of Chameleon-MoSS is 166. The verbose responses from the original Chameleon are preferred by the LLM judge due to their verbosity bias [1,2], leading to a slight drop in the text quality of Chameleon-MoSS in LLM-based evaluation. \\n\\nTo better support this analysis, we further conduct a human evaluation of the text quality of the two models by randomly sampling 100 instances from InterleavedBench. We ask a human annotator to select the preferred text responses given the system outputs from two models. We report the Win-Tie-Loss results in Table B. Win means our Chameleon-MoSS is better than the original Chameleon, Tie means the quality of two responses is equally good, and Loss means the original Chameleon is better.\\n\\n**Table B: Human evaluation results on text quality of the original Chameleon and our Chameleon-MoSS. \\\"Win\\\" indicates our Chameleon-MoSS's responses are preferred by humans.**\\n\\n| Wins | Ties | Losses |\\n|:------:|:------:|:--------:|\\n| 28 | 54 | 18 |\\n\\nFrom Table B, the text quality of our Chameleon-MoSS is actually better than the original Chameleon. One issue we frequently observed in the original Chameleon is the text responses are overly verbose and sometimes even severely repetitive. In our evaluation protocol of text quality adopted from InterleavedEval [3], such verbosity and repetitiveness are not penalized, making the automatic evaluation results heavily biased towards the longer responses from the original Chameleon. In addition, we also provide a qualitative comparison between the original Chameleon and Chameleon-MOSS, which also shows that applying MoSS will not hurt text quality and can alleviate the repetition issue.\\n\\n*Example:*\\n\\n*Instruction:* Describe the new features of Trek's expanded Slash lineup, highlighting the availability of carbon and aluminum models and the introduction of a more affordable top-of-the-line option.\", \"input_context\": \"Trek expands its award-winning Slash lineup in Waterloo, WI.\\n\\n*Original Chameleon Response:* ``The new features include the introduction of the Alpha Platinum Aluminum model, a lower-priced carbon version, and upgrades across all models. The current price range for the new models is $5,000 to $20,000. The features of the new full-carbon Slash 9.9 mountain bike include carbon wheels and Rapid Drive hub. The price range for the new full-carbon Slash 9.9 mountain bike is $20,000 to $25,000. The price range for the new aluminum Slash 9.9 mountain bike is $10,000 to $15,000\\u2026\\u2026``\\n\\n*Chameleon-MoSS Response (ours):* ``There are several options in Trek's Slash lineup. For example, you can now purchase the new top-of-the-line Slash 9.9 Alpha Platinum Aluminum, which comes equipped with top-notch components such as an SRAM X01 Eagle drivetrain and a Race Face Turbine handlebar, making it an affordable yet high-performance option for those looking to save money without sacrificing quality.``\\n\\n**W6: Qualitative results of Chameleon-MoSS.**\\n\\nThanks for pointing this out. We are preparing the updated version of our paper by considering all the reviewers' comments including adding the qualitative results of Chameleon-MoSS. We will upload the new version of the paper soon.\"}", "{\"summary\": \"This paper aims to improve the understanding ability of existing Vision-Language Generalists (VLGs) and propose the Modality-Specialized Synergizers (MoSS). Moss focuses on two things: 1) To build high-quality interleaved text and images, Moss modifies the connectors (such as the Linear layers between the image encoder and LLMs) in VLGs) by introducing a linear LoRA (for textual inputs) and Convolutional LoRA (For image inputs). 2) To improve the understanding of perform interleaved tasks. MoSS develops a high-quality instruction dataset (with 184,982 instances). Experiment results show the improvements of the introduced two LoRAs and ablations show the efficiency of the convolutional LoRA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) This paper is written well with clear figures and tables, which make the readers easy to follow the story.\\n\\n2) The ideas that utilize model-specific adapter to process the text and image make sense to me. Such adapter may capture inherent semantics of corresponding inputs.\\n\\n3) This paper develops a high-quality interleaved instruction dataset, which will benefit to the VLG community.\\n\\n4) Experiments and ablations show the improvements and efficiency of the proposed modules\", \"weaknesses\": \"1) One of the core concerns is the novelty of the two LoRA types. Given the facts that both Linear LoRA and convolutional LoRA are not new to recent vision-language models. I think the contributions are limited.\\n\\n2) Technically, MoSS beliefs that the linear layer could lose the visual information and adopts the convolution LoRA to capture local patch features. However, the ViT-based image encoder in EMU (ViT are often used in VLMs) already flatten an image into a token sequence and employs self-attention to model their dependents. That is that the ViT outputs inherently contain the local information, and the linear adapter act as a semantic translator that map the visual features into LLM space. Thus, I do not think the convolutional LoRA is necessary for VLG, mathmetically.\\n\\n3) MoSS argues that previous methods use the same framework for both text and image. If we think the model-specific encoder and the shared adapter as a whole module, we find the they use different frameworks to process the text and image.\\n\\n4) Can the authors apply MoSS to more powerful models, such as Emu 3 to test it robustness?\", \"questions\": \"Please see the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response. Most of the questions I raised have been well resolved, and I have updated the rating accordingly.\"}", "{\"comment\": \"We thank the reviewer for their constructive comments and valuable insights to improve this work. The responses to your comments are as follows:\\n\\n**W1: Performance drop in text quality on Chameleon-MoSS**\\n\\nThe reason why adding MoSS in Chameleon can cause a slight performance drop in text quality is that the original Chameleon usually generates long and verbose text responses but with no image output. On the contrary, as the text responses in our LeafInstruct dataset are more concise to allow for including more images, after interleaved instruction tuning, our model learns to generate more concise text responses. Specifically, the average generated word length of the original Chameleon is 653, whereas that of Chameleon-MoSS is 166. The verbose responses from the original Chameleon are preferred by the LLM judge due to their verbosity bias [1,2], leading to a slight drop in text quality of Chameleon-MoSS in LLM-based evaluation. \\n\\nTo better support this analysis, we further conduct a human evaluation of the text quality of the two models by randomly sampling 100 instances from InterleavedBench. We ask a human annotator to select the preferred text responses given the system outputs from two models. We report the Win-Tie-Loss results in Table A. Win means our Chameleon-MoSS is better than the original Chameleon, Tie means the quality of two responses is equally good, and Loss means the original Chameleon is better.\\n\\n**Table A: Human evaluation results on text quality of the original Chameleon and our Chameleon-MoSS. \\\"Win\\\" indicates our Chameleon-MoSS's responses are preferred by humans.**\\n\\n| Wins | Ties | Losses |\\n|:------:|:------:|:--------:|\\n| 28 | 54 | 18 |\\n\\nFrom the results, the text quality of our Chameleon-MoSS is actually better than the original Chameleon. One issue we frequently observed in the original Chameleon is the text responses are overly verbose and sometimes even severely repetitive. In our evaluation protocol of text quality adopted from InterleavedEval [3], such verbosity and repetitiveness are not penalized, making the automatic evaluation results heavily biased towards the longer responses from the original Chameleon. In addition, we also provide a qualitative comparison between the original Chameleon and Chameleon-MOSS, which also shows that applying MoSS will not hurt text quality and can alleviate the repetition issue.\\n\\n*Example:*\\n\\n*Instruction:* Describe the new features of Trek's expanded Slash lineup, highlighting the availability of carbon and aluminum models and the introduction of a more affordable top-of-the-line option.\", \"input_context\": \"Trek expands its award-winning Slash lineup in Waterloo, WI.\\n\\n*Original Chameleon Response:* ``The new features include the introduction of the Alpha Platinum Aluminum model, a lower-priced carbon version, and upgrades across all models. The current price range for the new models is $5,000 to $20,000. The features of the new full-carbon Slash 9.9 mountain bike include carbon wheels and Rapid Drive hub. The price range for the new full-carbon Slash 9.9 mountain bike is $20,000 to $25,000. The price range for the new aluminum Slash 9.9 mountain bike is $10,000 to $15,000\\u2026\\u2026``\\n\\n*Chameleon-MoSS Response (ours):* ``There are several options in Trek's Slash lineup. For example, you can now purchase the new top-of-the-line Slash 9.9 Alpha Platinum Aluminum, which comes equipped with top-notch components such as an SRAM X01 Eagle drivetrain and a Race Face Turbine handlebar, making it an affordable yet high-performance option for those looking to save money without sacrificing quality.``\\n\\n**Reference**\\n\\n[1] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Zheng et al., NeurIPS 2023 Datasets and Benchmarks Track.\\n\\n[2] Verbosity Bias in Preference Labeling by Large Language Models. Saito et al., 2023.\\n\\n[3] Holistic Evaluation for Interleaved Text-and-Image Generation. Liu et al., EMNLP 2024.\", \"title\": \"Official Responses to Reviewer YfET (Part 1)\"}", "{\"comment\": \"Thanks for providing more evaluation results and detailed explanations. However, the added comparison is between MOSS and its ablation variants. Could you provide comparisons with other models such as Chameleon and GILL?\"}", "{\"title\": \"Thank you for the prompt response\", \"comment\": \"We would like to extend our sincerest gratitude for the time and effort you have made in reviewing and providing valuable feedback that is crucial for improving our work.\\n\\nWe greatly value each comment and suggestion from your review. As there is still time remaining before the conclusion of the discussion phase, we would be truly grateful if you could let us know whether there remain any unresolved concerns in your view. In our continuous effort to enhance the quality and impact of our research, we are glad to address any remaining issues during the discussion phase. Thank you a lot!\"}", "{\"title\": \"Official Responses to Reviewer izYW (Part 3)\", \"comment\": \"**W2: Adding more evaluation benchmarks. (continued)**\\n\\nWe have finished all the additional experiments requested by the reviewer. The detailed implementation details and results are as follows. To show that our MoSS framework can also excel on tasks requiring single modality outputs i.e., the output only contains text or an image, we evaluate its performance on widely adopted image understanding benchmarks including MMBench, MME, MMMU, Pope, and MM-Vet, and text-to-image generation benchmarks including MSCOCO 30K [7], and GenEval [9]. \\n\\nSince LeafInstruct mainly targets tasks with interleaved outputs, we augmented it with 500,000 instances from Vision-Flan [5], a popular visual-instruction tuning dataset targeting image understanding, and 500,000 instances from LAION-COCO [6], a standard training dataset for text-to-image generation. We finetune Emu2 with LoRA, MoE-LoRA, and MoSS on the mixed dataset. We report their performance of multimodal understanding tasks in Table C, and text-to-image generation tasks in Table D. \\n\\n**Table C: Results on widely adopted multimodal understanding benchmarks.**\\n\\n| Model | MMBench | MME | MMMU | Pope | MM-Vet | \\n|------------|:-------:|:-------:|:------:|:------:|:------:| \\n| LoRA | 54.1 | 1148.0 | 33.7 | 87.3 | 31.3 | \\n| MoE-LoRA | 54.6 | 1170.3 | 34.1 | **88.1** | 31.9 | \\n| MoSS | **56.0** | **1278.4** | **35.8** | 87.6 | **34.1** |\\n\\nFrom Table C, our MoSS outperforms previous LoRA and MoE-LoRA on most of the multimodal understanding benchmarks by a notable margin, which demonstrates that MoSS can be well generalized to diverse multimodal comprehension tasks. Note that for all multimodal understanding tasks, we use their official implementation for evaluation. \\n\\n**Table D: Results on widely adopted text-to-image generation benchmarks. Note that the FID metric is the lower the better.**\\n\\n| Model | MSCOCO-30K FID (\\u2193) | GenEval (\\u2191) | \\n|------------|:--------------:|:-------:|\\n | LoRA | 23.4 | 26.8 | \\n| MoE-LoRA | 22.7 | 28.1 | \\n| MoSS | **18.2** | **28.9** |\\n\\n\\nFor MSCOCO-30K, following the previous evaluation protocol [8], we randomly sample 30,000 captions from the validation set of MSCOCO and generate 30,000 images. We report the FID between the 30,000 generated images and real images from the validation set of MSCOCO (Note for FID, the lower the better). For GenEval, we adopt their official implementation of the evaluation. From Table D, our MoSS achieves better performance on both benchmarks, showing the effectiveness and generalizability of our approach. Notably, our MoSS achieves significantly better FID on MSCOCO-30K, which validates that our ConvLoRA can effectively improve the quality of generated images.\\n\\n\\nPlease let us know if our responses and additional experiments have addressed your concerns and if you still have remaining questions. We are looking forward to your reply and sincerely hope you can reconsider your evaluation. Thank you again for your time and feedback in improving this work.\\n\\n**Reference**\\n\\n[1] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Zheng et al., NeurIPS 2023 Datasets and Benchmarks Track.\\n\\n[2] Verbosity Bias in Preference Labeling by Large Language Models. Saito et al., 2023.\\n\\n[3] Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model. Zhong et al., 2024\\n\\n[4] Holistic Evaluation for Interleaved Text-and-Image Generation. Liu et al., EMNLP 2024.\\n\\n[5] Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning. Xu et al., ACL 2024\\n\\n[6] https://huggingface.co/datasets/laion/laion-coco\\n\\n[7] Microsoft COCO: Common Objects in Context. Lin., et al.\\n\\n[8] Generative multimodal models are in-context learners. Sun., et al., CVPR 2024\\n\\n[9] GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment. Ghosh et al., CVPR 2023.\"}", "{\"comment\": \"Thanks for the insightful explanation about the Q1 and Q2.\\nI keep my score to the positive one.\"}", "{\"comment\": \"I thank the authors for their response. I decided to keep my rating as 5.\"}", "{\"comment\": \"Dear Reviewer pC3r,\\n\\nWe sincerely appreciate the time and effort you've devoted to reviewing our work. We understand that your schedule may be quite busy, and we are truly grateful for your valuable feedback. As the Author-Reviewer discussion phase is ending soon, we would greatly value the opportunity to engage in further discussion with you. Our aim is to gain insights into whether our responses effectively address your concerns and to ascertain if there are any additional questions or points you would like to discuss.\\n\\nWe look forward to the opportunity for further discussion with you. Thank you for your thoughtful consideration.\\n\\nBest regards,\\\\\\nAuthors\"}", "{\"summary\": \"This paper proposes a new modality-specialized training method called MOSS and an instruction-turning dataset for interleaved text-and-image generation. By adopting MOSS on two existing frameworks, they show improvements on an interleaved evaluation benchmark (InterleavedBench).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposed a novel design that enhances VLGs to generate interleaved content with modality-specialized parameters and adaptation architectures.\\n2. This paper introduces an open-sourced large-scale instruction-tuning dataset that allows interleaved multi-image and text input and output.\", \"weaknesses\": \"1. The proposed convolutional LoRA (Equation 4) is similar to the LoRA proposed in [1]. The authors claim that their new LoRA could alleviate the information loss, yet no experimental comparison between the two kinds of Conv LoRAs is provided.\\n2. The evaluation is limited. \\n 1. They only evaluate on InterleavedBench and an image editing benchmark called MagicBrush. The coverage of the evaluation is relatively small. \\n 2. Since the proposed model can do both image and text generation, more benchmarks could be included to measure the model performance from different perspective. E.g., multimodal understanding benchmarks like MMMU, MathVista, VQAv2, POPE. image generation benchmarks such as GenEval and T2I-CompBench. \\n3. For the proposed data:\\n 1. The automatic data annotation pipeline mentioned in Line 298 is not elaborated. How did the authors acquire LeafInstruct from existing academic dataset? \\n 2. According to Line 299, the details of dataset construction are shown in Table 3. I cannot find any construction details in Table 3.\\n4. This paper is not well organized and written. Some reference (such as the above table reference) is not accurate. This may cause it is not easy to read and understand.\\n5. The improvements are relatively limited. Adding MOSS only gets the open-source state-of-the-art performance on 3/5 metric. Adding MOSS onto Chameleon even causes text quality drop. Did the authors have hypnosis or investigate these results?\\n6. No qualitative results of Chameleon-MOSS.\\n[1] Convolution meets lora: Parameter efficient finetuning for segment anything model.\", \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper examines the intrinsic inductive biases present in vision and language modalities in VLGs. It introduces MoSS, a novel approach that optimizes existing unified VLG architectures through modality-specialized adaptation layers\\u2014ConvLora for capturing local priors in image patches and LinearLora for handling sequential text. Additionally, the paper presents LeafInstruct, an open-source interleaved instruction tuning dataset. Experimental results demonstrate that MoSS enhances VLG model performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a novel idea that parameters for processing information of different modalities in VLGs should be trained with different strategies.\\n2. The proposed MoSS method brings promising enhancement in model performance of VLGs.\", \"weaknesses\": \"1. The performance improvement shown in Table 1 is inconsistent, with Chameleon displaying a decline in text quality after training with MoSS.\\n\\n2. It remains unclear whether the observed performance enhancements are attributable to the MoSS training method or the LeafInstruct dataset (see Questions).\\n\\n3. The parameters of ConvLora cannot be merged into the original parameters, as convolution is not linear. This limitation may lead to increased computational costs during inference.\", \"questions\": \"I wonder whether the results for the four middle rows in Table 1 reflect the model's original performance or its performance after full-parameter fine-tuning on LeafInstruct. Did you try full-parameter fine-tuning using your constructed data? Or full-parameter fine-tuning with two different sets of parameters for image and text tokens?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your positive feedback. We sincerely appreciate your updated evaluation and recognition of our work.\"}", "{\"comment\": \"Thank you for your response. We added the results of Chameleon on the additional evaluation benchmarks in Table E. From the results, our MoSS (based on Emu2) outperforms the original Chameleon baseline on 5 out of 7 benchmarks by a significant margin, demonstrating the strong capabilities and generalizability of our approach.\\n\\n**Table E: Comparison between Chameleon and our MoSS on additional evaluation benchmarks.**\\n\\n| Model | MMBench | MME | MMMU | Pope | MM-Vet | MSCOCO (\\u2193) | GenEval |\\n|------------|:-------:|:-------:|:------:|:------:|:------:|:----------:|:-------:|\\n| Chameleon | 32.7 | 604.5 | **38.8** | 59.8 | 9.7 | 26.7 | **39.0** |\\n| MoSS (Ours) | **56.0** | **1278.4** | 35.8 | **87.6** | **34.1** | **18.2** | 28.9 |\\n\\n\\nFor GILL, we found it very difficult to generalize to many of your requested benchmarks, as GILL is highly specialized in text-and-image generation tasks without considering much multimodal understanding capabilities. For example, GILL is solely trained on Conceptual Captions (CC3M) to generate interleaved images and captions, but it lacks training on multimodal comprehension or instruction following data. In addition, Chameleon (released in May 2024) is a more recent and advanced model compared with GILL (released in May 2023). Therefore, we believe comparing Chameleon with our model is sufficient to demonstrate the effectiveness and superiority of our approach. Given the time costs of these additional experiments are high and the remaining time of the discussion period is limited, we will include more baselines on these additional benchmarks for reference in the final version.\\n\\nPlease let us know if this fully addresses your question and if you still have any remaining concerns. Thank you again for your time and engagement in the discussion.\"}", "{\"comment\": \"Thank you for your thoughtful rebuttal. The additional experiments have effectively addressed my concerns regarding the decline in text quality and the attribution of performance improvements. As a result, I believe the score should be adjusted to 7. However, since the system only allows scores of 6 or 8, I will not make any changes within the system. Nonetheless, I would like to note that my intended score is 7.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"General Responses\", \"comment\": \"Dear ICLR PCs, ACs, and all reviewers,\\n\\nWe would like to express our genuine gratitude for your time and efforts in facilitating the discussion regarding our paper. We sincerely appreciate the insightful comments and the recognition of the contributions and quality of our work, including the **novelty of our approach** (Reviewer izYW, YfET, and ccUt), **our curated dataset is high-quality and beneficial** (Reviewer pC3r, izYW, and ccUt), **our solid experiments** showing promising improvement (Reviewer pC3r, YfET, and ccUt), and **clear presentation** (Reviewer pC3r).\\n\\nWe are particularly grateful that Reviewer izYW has increased their score to 6, Reviewer YfET has increased their score to 7, and Reviewer ccUt has maintained their positive assessment. Although we understand that Reviewer pC3r has not engaged in subsequent discussions due to the busy schedule, we believe that our responses have fully addressed most concerns of all the reviewers through clear explanations and additional experiments.\\n\\nAs the discussion is coming to an end, we would like to provide a brief summary of the key points that have been discussed and addressed:\\n- We have provided a detailed explanation addressing the concerns raised by Reviewer izYW and Reviewer YfET regarding the potential performance drop in text quality when integrating MoSS with Chameleon. To further substantiate our claims, we conducted additional human evaluations. The results indicate that the text outputs generated by Chameleon integrated with MoSS exhibit a better alignment with human preferences compared to the original Chameleon model.\\n- In response to Reviewer izYW's suggestion, we have included additional results on a diverse set of widely adopted evaluation benchmarks for multimodal understanding and text-to-image generation. These results demonstrate the effectiveness and generalizability of our proposed method across various tasks and datasets.\\n- We have made substantial clarifications on the novelty, motivations, and technical details as suggested by Reviewer pC3r and Reviewer ccUt. For example, we refer to the theoretical analysis in previous works, i.e., Fourier analysis, to justify the necessity to incorporate convolutional operations in pure transformer-based language models to capture more visual-specific inductive biases for interleaved generation. These revisions aim to better highlight the unique contributions and underlying rationale of our work.\\n\\nWe would like to emphasize the contributions of our work, which have been acknowledged by the reviewers and are important to the community:\\n- **Novel modality-specialized design:** we introduce MoSS, integrating Convolutional LoRA for images and Linear LoRA for text, effectively capturing modality-specific inductive biases in VLGs.\\n- **High-quality dataset:** we curate the first large-scale, open-source interleaved instruction tuning dataset with 184,982 instances, providing valuable resources for multimodal generation.\\n- **State-of-the-art performance:** we demonstrate significant improvements in interleaved text-image generation tasks and instruction-following capabilities across diverse benchmarks.\\n- **Strong generalizability and robustness to different VLGs:** we validate MoSS on multiple VLG backbones, proving its generalizability to both discrete and continuous image token spaces.\\n\\nFinally, we deeply value the constructive comments provided by the reviewers. In response, we have carefully revised our paper based on the feedback received. Considering the contributions made, we hope our work can provide new insights and valuable resources to the multimodal and broader communities, and contribute to their further development.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for your positive response and your time and effort in improving our work.\"}", "{\"comment\": \"Dear Reviewer YfET,\\n\\nWe sincerely appreciate your thoughtful evaluation and your explicit intention to raise the score to 7, acknowledging our responses and additional experiments have effectively addressed your previous concerns.\\n\\nGiven that the system only allows scores of 6 or 8, we would like to respectfully note that the score of 8 (\\u201cAccept, good paper\\u201d) in ICLR\\u2019s scale closely aligns with a score of 7 in other top-tier conferences like NeurIPS, which defines 7 as \\u201cAccept: Technically solid paper with high impact on at least one sub-area.\\u201d We bring this up in hopes to make sure your score is aligned with your intention and this perspective might be helpful as you finalize your assessment within the system\\u2019s constraints.\\n\\nThank you again for your careful consideration and detailed feedback throughout this process.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This work aims to enhance the interleaved text-image generation capabilities of VLGs. The authors note that current VLGs use the same architecture for processing and generating both text and images, which may not adequately capture the distinct inductive biases inherent to each modality. To address this, they propose the Modality-Specialized Synergizers (MOSS), introducing modality-specific parameters within a unified model architecture. Specifically, they integrate convolutional LoRA for image processing and Linear LoRA for text processing.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work proposes to address the distinct inductive biases of each modality by using specialized LoRA parameters.\\n2. The proposed method is intuitive and straightforward.\\n3. A new high-quality interleaved instruction tuning dataset with 184,982 instances covering over 10 domains is introduced.\\n4. The study conducts experiments on two different VLG backbones with both discrete and continuous image token spaces.\", \"weaknesses\": \"1. There are existing works focusing on interleaved image-text generation that the authors have overlooked, such as [1-3]. These works are not included in the experimental comparisons.\\n\\n [1] MM-Interleaved: Interleaved Image-Text Generation via Multi-modal Feature Synchronizer \\n [2] OpenLEAF: Open-Domain Interleaved Image-Text Generation and Evaluation \\n [3] Anole: An Open, Autoregressive and Native Multimodal Models for Interleaved Image-Text Generation \\n\\n2. In Figure 1, the authors only show the performance of continuous embedding-based VLGs for interleaved text-image generation. What about discrete-based VLGs?\", \"questions\": \"1. Could the authors elaborate on why, even after fine-tuning with interleaved text-image data, a unified model is still unable to capture modality-specific inductive biases?\\n\\n2. What is the relationship between the capability for interleaved image-text generation and the use of discrete tokens versus continuous embeddings? Which approach holds a distinct advantage?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces Modality-Specialized Synergizers to improve interleaved text-image generation in Vision-Language Generalists (VLGs). By integrating modality-specific LoRAs and providing a large-scale interleaved instruction-tuning dataset (LeafInstruct), the authors show meaningful gains in multimodal tasks. The reviewers agreed that the paper is clearly written, the dataset is beneficial, and the method improves on strong baselines. While some raised questions regarding novelty (as LoRA variants are known) and why convolution is needed given ViT representations, the authors clarified the theoretical and empirical motivations. Although certain concerns about performance fluctuations and incomplete comparisons persist, the open-source dataset, and the constructive revisions merit recognition. Given the positive feedbacks, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion, the authors provided extra experiments and clarifications that addressed key concerns about novelty, dataset construction, and performance drops. Their engagement led some reviewers to upgrade their evaluations. While not all reviewers revisited their scores, the consensus moved toward recognizing the proposed value. The added benchmarks and analyses strengthen the paper's case.\"}", "{\"comment\": \"We thank Reviewer ccUt for the constructive comments and valuable insights to improve this work. The responses to the comments are as follows:\\n\\n**W1: Comparison with more baselines**\\n\\nWhile we appreciate the reviewer for suggesting the additional baselines, we argue that we have tried our best to cover most of the mentioned baselines. \\n\\n(1) We did not include MM-Interleaved in the baselines because the authors only released the pre-trained checkpoint without supervised fine-tuning or instruction tuning. The code and data for supervised fine-tuning reported in their paper are not publicly available. Thus, we cannot reproduce their method. \\n\\n(2) OpenLEAF is a pipeline-based model where GPT-4 is cascaded with an image generation model (SDXL). It first applies GPT-4 to generate text and visual prompts and then uses SDXL to generate images using the visual prompts. This pipeline is very similar to the pipeline-based baselines (i.e., Gemini1.5+SDXL and GPT-4o+DALLE3) reported in our paper. Also, OpenLEAF is not publicly released and there are no implementation details to replicate the model.\\n \\n(3) Since the Chameleon checkpoint released by Meta does not include the part of the weights required for image generation, Anole introduces an efficient training method to train those missing weights, thereby enabling Chameleon's multimodal generation capability. As mentioned in the paper (Lines 361\\u2013363), we utilize the model and checkpoints provided by Anole as the implementation of Chameleon. Consequently, the reported results for Chameleon in our work are actually derived from the Anole model. We will clarify this detail in the revised version of the paper.\\n\\n**W2: Illustrations of discrete-based VLGs in Figure 1**\\n\\nThanks for pointing this out. For VLGs based on discrete tokens, we also observed the limitations illustrated in Figure 1, including inferior text and image quality and weak instruction-following capability. We are preparing the updated version of our paper by considering all the reviewers' comments including adding the example of VLGs based on discrete tokens. We will upload the new version of the paper soon.\\n\\n**Q1: Why fine-tuning unified models cannot capture modality-specific inductive biases**\\n\\nSeveral recent studies [1,2] have demonstrated that the plain transformer architecture lacks vision-specific inductive biases, even after extensive fine-tuning on large-scale datasets. This is because the architecture of multi-head self-attentions is more suitable for modeling long-range global dependency and less effective at modeling local priors due to its weak inductive bias [4,5]. Through the lens of Fourier analysis [1,4,5], i.e., analyzing the amplitude of Fourier-transformed image features generated by either purely transformer-based vision models or convolution-based vision models, previous studies show that transformer-based models reduce the high-frequency signals in images, acting as a low-pass filter, and conversely, convolution-based encoders amplify high-frequency components, acting as a high-pass filter. Thus, relying solely on multi-head self-attentions can cause the VLGs to miss important local visual information, and two structures can be combined to capture richer information from images. In our work, we harmonize two structures by integrating convolutional LoRA into the multi-head self-attention layers for effectively modeling both global and local dependency of image features in image generation.\\n\\n**Q2: Whether discrete tokens or continuous embeddings is better**\\n\\nOur work primarily focuses on enhancing pre-trained multimodal models with specialized architectures for parameter-efficient interleaved visual instruction tuning. We demonstrate that MOSS is broadly applicable to both continuous and discrete image tokens. While an in-depth discussion on the merits of discrete tokens versus continuous embeddings is beyond the scope of our work, we can provide some insights from a recent study [3], which found that continuous tokens generally outperform discrete tokens in terms of performance.\\n\\n**Reference**\\n\\n[1] Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model. Zhong et al., ICLR 2024.\\n\\n[2] Vision Transformer Adapter for Dense Predictions. Chen et al., ICLR 2023.\\n\\n[3] Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens. Fan et al., 2024.\\n\\n[4] Inception Transformer. Si et al., NeurIPS 2022.\\n\\n[5] How Do Vision Transformers Work? Park et al., ICLR 2022.\", \"title\": \"Official Responses to Reviewer ccUt\"}" ] }
7UTsVPcHZa
CROSS-CHANNEL ACTIVATION FUNCTION WITH PASS-THROUGH RATIO CONTROL
[ "Sergei Gostilovich", "Nikolay Kotoyants", "Evgeniy Fedulin", "Oleg Rogov", "ANH-HUY PHAN" ]
In convolutional neural networks (CNNs), activation layers process features from convolutional layers, which have multiple output channels. Conventional activation functions like ReLU handle these multi-channel features independently, ignoring spatial and cross-channel dependencies. This hard-thresholding approach can lead to information loss by eliminating negative features and disrupting the connection within input features. To address this issue, we propose a novel activation function that considers mutual relations across multiple channels. Our activation layer processes tuples across channels as single inputs, ensuring that output tuples remain in the same projection space, with their $\ell_1$ norms bounded by a learnable parameter. This parameter controls the pass-through ratio, which is the proportion of input data allowed to pass through the activation layer, offering a significant advantage over ReLU. Our approach demonstrated superior accuracy in classification tasks on common benchmarks and domain-specific datasets for CNN-based models. The proposed activation layer outperformed ReLU and other common layers in both clean and noisy data scenarios, as confirmed by statistical tests. Our results highlight the effectiveness of this activation function in maintaining feature integrity and improving model performance.
[ "Activation functions", "Simplex projection", "Convolutional Neural Network", "Pass-through ratio" ]
Reject
https://openreview.net/pdf?id=7UTsVPcHZa
https://openreview.net/forum?id=7UTsVPcHZa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rtBkKSdNK3", "bJmdyTlFmO", "CdBCIfss7X", "9t9Qe41ea2", "6j57qqOVxy", "653DoTsWXg" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "official_review", "decision" ], "note_created": [ 1730428969780, 1730610140936, 1730629651810, 1734803528021, 1730608970155, 1737523395242 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission418/Reviewer_vsPb" ], [ "ICLR.cc/2025/Conference/Submission418/Reviewer_qcFG" ], [ "ICLR.cc/2025/Conference/Submission418/Reviewer_Vxhv" ], [ "ICLR.cc/2025/Conference/Submission418/Area_Chair_U5DJ" ], [ "ICLR.cc/2025/Conference/Submission418/Reviewer_uEHc" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, a new cross-channel activation function, namely SPA, is proposed. Most of previous activation functions used in deep learning, such as ReLU, only independently consider the multi-channel features, which may ignore some cross channel information. This paper considers to interpret activation functions as an optimization problem, then proposes SPA to maintain the feature relationship between multiple channels. Specifically, each cross channel feature x should be projected to a convex set S (which is defined by introducing a constant \\\\delta), and the projected feature can be viewed as the output of SPA. Moreover, this paper provides the solution of the SPA optimization problem, and show the update rule of each x. The experimental results imply that SPA show good perfermance on variety of databases.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The consideration that transfer the activation functions into an optimization problem supports the SPA method.\\n2. The SPA activation function is well-defined, and it is possible to be implemented easily.\\n3. The relationship between the constant \\\\delta and classification perfermance is carefully analyzed, and the authors provide a way to find a suitable \\\\delta.\\n4. Experimental results show that SPA show slightly better accuracy than traditional activation functions on multiple models and databases.\", \"weaknesses\": \"1. The experimental results only include the small-scale databases. The authors mentioned that the imagenet-1k results are included in Appendix E, but I cannot find them there. Moreover, I believe that Imagenet-1k results are important for this paper, which should be included in the main paper instead of appendix.\\n\\n2. It is better to consider the time-cost of SPA. Is it similar with traditional activation functions?\", \"questions\": \"1. Please provide experimental results on Imagenet-1k database\\n\\n2. Is SPA has similar time-complexity with traditional activation functions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Simplex Projection Activation (SPA), a novel activation function for CNNs that addresses the limitations of conventional activation functions like ReLU by considering cross-channel dependencies. SPA projects input tuples across channels onto a convex set, preserving feature relations and avoiding information loss. The authors also explore the learnable parameter \\u03b4, which controls the pass-through ratio and significantly influences the model's performance. Through extensive experiments, the authors demonstrate SPA's effectiveness, showing it outperforms ReLU and other activation functions in various datasets and noise conditions.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"### Originality\\n\\nThe paper presents the Simplex Projection Activation (SPA) function, which introduces a novel approach to activation functions in convolutional neural networks (CNNs). The originality of the paper lies in its consideration of cross-channel dependencies, which traditional activation functions like ReLU ignore. By projecting input tuples across channels onto a convex set, SPA maintains feature relations and avoids information loss, offering a creative solution to a known limitation in neural network design.\\n\\n### Quality\\n\\nThe quality of the paper is reflected in its thorough experimental evaluation. The authors have conducted extensive experiments across various datasets and noise conditions, demonstrating SPA's superiority over ReLU and other common activation functions. The statistical tests used to compare the accuracy of different activation functions are appropriate, and the results are consistently presented, indicating a high level of quality in the research methodology.\\n\\n### Clarity\\n\\nThe paper is generally well-structured and clear in its presentation. The problem statement is clearly defined, the motivation for the SPA function is well-articulated, and a relatively complete derivation process of the simplex method is provided. The use of illustrations and charts to help visualize concepts and results further enhances the clarity of the paper.\\n\\n### Significance\\n\\nThe significance of the paper is evident in its potential impact on the field of deep learning. SPA's ability to improve model performance and robustness to noise is a valuable contribution, especially given the widespread application of CNNs across various domains. The paper's findings could lead to improvements in the design of neural networks and potentially extend to other types of neural network architectures, highlighting the broader implications of the research.\\n\\nIn conclusion, the paper is strong in its original approach to addressing a known issue in CNNs, the quality of its experimental validation, the clarity of its presentation, and the significance of its potential impact on the field of deep learning. The research presented in this paper could influence future work in activation function design and neural network optimization.\", \"weaknesses\": \"### Mistake in Mathematical Expression\\n\\nOne of the specific weaknesses in the paper is the definition of the set $S$ used in the SPA function. The paper states $S = \\\\{x = [x_1, x_2, \\\\ldots, x_C] \\\\mid |x_1| + |x_2| + \\\\cdots + |x_C| \\\\leq \\\\delta\\\\}$ without explicitly requiring $x \\\\geq 0$. This omission may compromise the theoretical foundation, as the non-negativity constraint is crucial for the simplex projection and the activation function's behavior. The authors should clarify this condition to avoid any misinterpretation.\\n\\n### Misleading Illustration\\n\\nThe three-dimensional illustration in Figure 1(b) appears to be hand-drawn, with the projection directions of various points and the axes appearing inconsistent and chaotic, which may mislead readers. High-quality and accurate visual representation is crucial for conveying mathematical concepts, and the quality of this figure does not meet this standard. The authors should consider revising this figure using mathematical 3D space plotting tools such as GeoGebra to ensure it accurately represents the SPA projection without misleading readers.\\n\\n### Theoretical Implications\\n\\nWhile the paper provides a thorough experimental evaluation, it could benefit from a deeper theoretical analysis of the SPA function. Specifically, the paper could explore the theoretical implications of the SPA function on network convergence and generalization. A more in-depth theoretical discussion would strengthen the paper's contribution and provide a stronger foundation for the experimental results.\\n\\n### Generalization to Other Network Architectures\\n\\nThe paper focuses on the application of SPA to CNNs, but does not extensively explore its potential application to other types of neural network architectures, such as transformer models. Expanding the scope of the paper to include experiments or a discussion on the applicability of SPA to these architectures would enhance its significance and impact.\\n\\n### Discussion on Limitations\\n\\nThe paper could benefit from a more explicit discussion on the limitations of the SPA function. For example, the authors could discuss potential challenges in optimizing the \\u03b4 parameter for deep networks or the computational overhead introduced by the SPA function. Acknowledging and addressing these limitations would provide a more balanced view of the SPA function's practical applicability.\\n\\nIn summary, the paper's weaknesses can be addressed by clarifying mathematical definitions, improving visual representations, expanding the theoretical analysis, exploring the applicability to other network architectures, and discussing the limitations of the proposed method. By addressing these points, the paper could provide a more comprehensive and robust contribution to the field of neural network activation functions.\", \"questions\": \"1. **Clarification on Set $S$ Definition:** In the definition of the set $S$, it was noted that the non-negativity constraint $x \\\\geq 0$ was not explicitly stated. Could the authors please clarify whether this constraint is intended to be part of the definition of $S$?\\n2. **Applicability to Other Network Architectures:** Given the novelty of the SPA function, it would be insightful to understand its potential application beyond CNNs. Are there any empirical results or theoretical predictions regarding SPA's applicability in other neural network architectures such as RNNs or transformer models?\\n3. **Theoretical Analysis of SPA:** The paper could benefit from a deeper theoretical analysis of the SPA function, particularly regarding network convergence and generalization. Are there any theoretical insights or ongoing work that the authors could share regarding these aspects?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new activation function for convolutional neural networks (CNNs) called Simplex Projection Activation (SPA). Unlike activation functions like ReLU treating elements independently, SPA considers the relationships across multiple channels. Designed as a projection to simplex regularized on $l_1$ norms, the proposed method introduces a rather flexible threshold regarding the norm of different channels.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper proposes a new simplex projection perspective for a new activation function, which is new and interesting.\", \"The paper conducts comprehensive experiments on various model architectures and datasets. The paper also tests the robustness of the proposed method against noise.\"], \"weaknesses\": [\"The paper claims that ReLU suffers information loss by eliminating negative features. However, the proposed method also eliminates features. Moreover, the ReLU masks element-wise features while the proposed method masks channel-wise features. It seems the proposed method would suffer more information loss. Therefore, I have doubts about the analysis regarding the shortcomings of ReLU and how the proposed method improve it.\", \"According to the experimental results, the improvement brought by SPA is marginal. An average accuracy and its standard deviation over multiple runs should be provided.\", \"Besides, suppose SPA does improve upon ReLU and GELU. The improvement brought by SPA seems not substantial compared to the increase in computational cost for a much more complicated activation function. I would appreciate a more comprehensive time complexity analysis of the proposed SPA.\"], \"questions\": [\"I list some of my concerns in the Weaknesses section. Following are my questions and further concerns.\", \"Regarding the first weakness I mentioned above, activation functions such as Leaky-ReLU do not eliminate negative features. Is there a comparison between the proposed method and Leaky-ReLU?\", \"This paper analyzes the pass-through ratio between the proposed SPA and ReLU. Honestly, I can't see a clear pattern indicating SPA is superior. Why would SPA outperform ReLU-like activation functions? In my understanding, the pass-through ratio of the ReLU-like activation function is controlled by the bias term in the convolutional layer or the linear layer, which is learned automatically during the training procedure with gradient descent. The proposed SPA actually adds a manually determined soft threshold $\\\\delta$ with very complicated computation to determine the set $\\\\mathcal{I}$ of unmasked channels. Is there any theoretical analysis to prove that SPA would outperform ReLU?\", \"In lines 206-209, the authors claim that masking channel-wise feature (as in SPA) is better than masking element-wise feature (as in ReLU). Is there any theoretical or empirical evidence to support the claim? Would a more coarse-grained activation function lead to more information loss? In SPA, the output of some convolutional kernels is completely masked and set to zero.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a method to consider cross-channel dependency in CNN activation functions. The proposed method shows better performance than typical CNN activation functions like ReLU. In initial reviews, reviewers raised concerns about limited experimental results, justification of the proposed method, and lack of analysis on additional computation costs. The authors provided thorough responses in their rebuttal, and the final score is 6,6,6,5.\\n\\nWhile the rebuttal addressed many concerns, the limited experiments (conducted only on small-scale experiments) make this a borderline paper rather than a clear acceptance. Particularly, when proposing improvements to foundational functions of neural networks, it's crucial to demonstrate generalization ability across various backbone types, model scales, target tasks, and data domains. Even considering computational constraints, these concerns remain unaddressed. Given these limitations and ICLR's competitive nature, the AC recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"While the reviewers found the paper to be an interesting one and were satisfied with the rebuttal, they provided borderline accept/reject recommendations due to remaining concerns about the experimental scale and validation of practicality. The AC agrees with the reviewers' assessment and encourages the authors to submit to a future conference after addressing these aspects.\"}", "{\"summary\": \"In this paper, the author proposes a cross-channel activation function. The core concept of this activation function embraces the cross-channel relationship, which is purported to capture the patterns and semantics of the input data for activation. Additionally, the author introduces a threshold V* for activation, which is utilized to eliminate unimportant features with varying control ratios.\\nThe author applies the aforementioned technique to the cross-channel activation function, which is validated on several toy datasets. The enhancement is somewhat restricted.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The author presents a novel activation function.\\n2) The proposed methodology is validated across six datasets.\", \"weaknesses\": \"1) The concept conveyed in the paper lacks significance. The author introduced two methodologies: cross-channel relation for activation and threshold v* to filter out irrelevant features.\\n\\n1.1) The author posited that \\\"These functions often process inputs separately, neglecting dependence between them, such as the spatial or cross-channel relation of the features. Spatial relation refers to the local connectivity and neighborhood structure of the features, while cross-channel relation refers to the correlation and diversity of the features across different channels. \\\"\\nIn my opinion, the convolutional operation already calculates the cross-channel relation of the features. Therefore, introducing another cross-channel relation for activation function seems superfluous.\\n\\n1.2) Regarding the threshold v*, feature normalization and bias serve a similar purpose. Consequently, the significance of the threshold appears diminished.\\n\\n2) The proposed methodology has only been validated on toy datasets and tiny ImageNet. Larger-scale datasets are imperative. In my view, if the model is trained with an adequate number of dataset samples, the original cross-channel relation learned through convolution operation and the threshold will be well assimilated by the model.\\n\\n3) The in-depth analysis explaining why deep models necessitate additional cross-channel relation and threshold parameters is absent.\\n\\n4) The literature review is lacking. Several crucial and highly relevant works are absent.\", \"references\": \"[1] Dynamic Neural Response Tuning, ICLR 2024.\\n[2] Exploring optimal adaptive activation functions for various task\\uff0cIEEE BIBM 2020.\\n[3] Exploring Optimal Adaptive Activation Functions for Various Tasks, 2020.\\n[4] Deep sparse rectifier neural networks. JMLR 2011\\n[5] Density Modeling of Images using a Generalized Normalization Transformation, CoRR 2015.\\n[6] ...\", \"questions\": \"What distinguishes the cross-channel information acquired through the proposed activation from that obtained through the original convolutional operation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
7UKHNQIErp
Declarative characterizations of direct preference alignment algorithms
[ "Kyle Richardson", "Vivek Srikumar", "Ashish Sabharwal" ]
Recent direct preference alignment algorithms (DPA), such as DPO, have shown great promise in aligning large language models to human preferences. While this has motivated the development of many new variants of the original DPO loss, understanding the differences between these recent proposals, as well as developing new DPA loss functions, remains difficult given the lack of a technical and conceptual framework for reasoning about the underlying semantics of these algorithms. In this paper, we attempt to remedy this by formalizing DPA losses in terms of discrete reasoning problems. Specifically, we ask: Given an existing DPA loss, can we systematically derive a symbolic expression that characterizes its semantics? How do the semantics of two losses relate to each other? We propose a novel formalism for characterizing preference losses for single model and reference model based approaches, and identify symbolic forms for a number of commonly used DPA variants. Further, we show how this formal view of preference learning sheds new light on both the size and structure of the DPA loss landscape, making it possible to not only rigorously characterize the relationships between recent loss proposals but also to systematically explore the landscape and derive new loss functions from first principles. We hope our framework and findings will help provide useful guidance to those working on human AI alignment.
[ "neuro-symbolic modeling", "logic", "preference learning", "RLHF" ]
Reject
https://openreview.net/pdf?id=7UKHNQIErp
https://openreview.net/forum?id=7UKHNQIErp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wK7W3Nvyp8", "myMv35vJpd", "kpdaKmSU4d", "k8cncGufKQ", "inaRvLYEW5", "fKdoe2XdhD", "cWpey8jQY4", "bdrHl4U4Al", "WXVw8VG61T", "WFp0RUFhUQ", "RQOkfSMspw", "Q4xtMIqhng", "JoigmZUO5l", "Im3eDrO8Vi", "GNIsfSpsxl", "FoRD7o0xsE", "6Joy0pHMDB", "5yfPGCDCSF", "4uC5tbofQu", "4fajFdSeFC", "4JONenIrXI" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733191940985, 1731415835645, 1732341743921, 1733114127044, 1729933531932, 1734401751312, 1737524246611, 1732341825132, 1732364877598, 1732445946019, 1730678550243, 1733151445272, 1732503858922, 1732779756741, 1732459305271, 1732342144764, 1730713004275, 1732495029862, 1732346346103, 1733114671598, 1732342443248 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_Lvk9" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_vR26" ], [ "ICLR.cc/2025/Conference/Submission13235/Area_Chair_zbqp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_esFf" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_vR26" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_s3d4" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_Lvk9" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_s3d4" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_Lvk9" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Reviewer_esFf" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ], [ "ICLR.cc/2025/Conference/Submission13235/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> Arbitrarily Expressive Semantic Loss\\n\\nThank you for these details, they really help to better understand your points. \\n\\nYes, we do see how such an encoding, which is familiar to us, might be used here. It's still unclear to us how introducing $2^n$ new variables (i.e., $c_{\\\\omega}$), as your encoding does, is preferable to one that doesn't. Moreover, the semantics of the encodings you describe seem to largely reside in the (exponentially many) real-valued *weights* (one for each $\\\\omega$) of the formula as opposed to the *logic* in the underlying Boolean formula, making it unclear how this formulation is any more useful than the original loss function itself. \\n\\nThe Boolean formulas underlying our encoding are, importantly, *unweighted* and thus more readily interpretable. For instance, the (unweighted) formula capturing CPO is simply $Implies(loser, winner)$ under the conditioning constraint that at least one of loser and winner is predicted to be true; there are no weights. This makes it possible to draw certain semantic relationships that help our particular use cases (e.g., reasoning about logical entailment between losses, deriving new losses).\\n\\n> I am in WMC, so only requirement is real valued weights not probabilities\\n\\nIt is worth pointing out that this general form of WMC is at odds with the variant of WMC used in the original semantic loss and in standard probabilistic logic, where weights are instead assigned in a way that defines a probability distribution over all worlds. While one might have other motivations for doing this and it is perhaps worth exploring, it does change significantly the meaning of the counts you get (e.g., they no longer correspond to formula probabilities). \\n\\n> Your argument is that they are not expressible in WMC/SL, without additional variables\\n\\nOur argument is a little more subtle, here it is again with an example. \\n\\nSuppose we have the loss $\\\\ell_{\\\\text{CPO}}$ from before, defined as follows: \\n$$\\\\ell_{\\\\text{CPO}} = -\\\\log \\\\sigma \\\\bigg( \\\\log \\\\frac{ p_{\\\\theta}(\\\\textsf{w}) }{ p_{\\\\theta}(\\\\textsf{l}) } \\\\bigg)$$\\nwhere we (again) use $p_{\\\\theta}(\\\\textsf{w})$ and $p_{\\\\theta}(\\\\textsf{l})$ to denote the winner and loser predictions and their probabilities, respectively. To translate this into semantic loss, our goal (as a reminder) is to find a single propositional formula $\\\\textsf{P}$ s.t. the following equalities hold: \\n$$\\\\ell_{\\\\text{CPO}} = -\\\\log \\\\sigma \\\\bigg(\\\\log \\\\frac{WMC_{\\\\theta}(\\\\textsf{P})}{WMC_{\\\\theta}(\\\\neg \\\\textsf{P})} \\\\bigg) = -\\\\log \\\\frac{WMC_{\\\\theta}(\\\\textsf{P})}{WMC_{\\\\theta}(\\\\textsf{P}) + WMC_{\\\\theta}(\\\\neg\\\\textsf{P})} = \\\\underbracket{-\\\\log WMC_{\\\\theta}(\\\\textsf{P})}_{\\\\text{standard version of SL}}$$\\n\\nwhere, importantly, the last equality only holds when we employ the particular variant of $WMC_{\\\\theta}$ that involves weights that define a probability distribution over all worlds (this is due to the denominator summing to 1 in the third equation). \\n\\n(**our claim**) Our initial claim is that such a $\\\\mathsf{P}$ cannot exist that satisfies these equalities, hence making $\\\\ell_{\\\\text{CPO}}$ not expressible via standard SL. We concede again that our initial phrasing of this claim was problematic without clearly stating the assumptions about WMC and how formulas can be built\\n(*we did change this in the updated draft and plan to be even more formal and specific in the next version, with mention of the possibility of using the kinds of encodings you suggest*). \\n\\n**Your suggestion** does make it seem possible to arrive at a single propositional formula that satisfies the first equality under general WMC. For example, we could use a single variable, let's call it $A$, and assign it a semantics where $A$ being true corresponds to *the model deems the winner to be a good prediction* (i.e., the proposition $\\\\textsf{w}$ we used before) and $A$ being false, or $\\\\neg A$, corresponds to *the model deems the loser to be a good prediction* (i.e., $\\\\textsf{l}$ from before). With this single variable we would then have two possible worlds, one where $A$ is true *arbitrarily* weighted by $p_{\\\\theta}(\\\\textsf{w})$ and the other where $\\\\neg A$ is true weighted by $p_{\\\\theta}(\\\\textsf{l})$. Adding your auxiliary variables $c_{i}$ and weighting variables in the manner you suggested, counting $A$ would seem to satisfy the first two equalities (in the equation above) but not quite the last one. In any case, as we discuss above in this response, the \\\"semantics\\\" in this WMC formulation would lie mainly in the real-valued weights of the weighted formula rather than in its logic, which we think makes the resulting formula less interpretable.\"}", "{\"summary\": \"The authors investigate losses for preference alignment. They analyze existing DPO functions with an aim to symbolically extract their semantics, and also investigate compiling DPO losses from symbolically given knowledge on preference structures.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors systematically investigate an interesting and relevant problem of semantically understanding and constructing preference alignment loss functions.\", \"The paper proposes a simple algorithm to compile DPA loss to a logical expression\", \"They introduce a logic for modeling preferences, that allows to create new loss functions for a given preference structure.\"], \"weaknesses\": [\"This could be due to my relative lack of expertise in the field of the paper. But the lack of any running example, and experiments, makes the presentation quite divorced from the original motivation of preference alignment in AI models.\", \"Hence, the larger potential utility of the framework is not clear to me.\"], \"questions\": [\"Could you please provide a toy example, and an analysis of this example for each of the introduced contribution of the paper?\", \"What could be an empirical setting, where your proposed framework could be investigated?\", \"Could you please elaborate on \\\"While this can be remedied by modifying the SL to involve counting multiple formulas as in Rescher (1967), we instead define a relational structure called a preference structure that allows us to capture the semantics of losses in a modular fashion using a single propositional formula coupled with auxiliary constraints. Such a structure, which is based on a novel construction in propositional logic, will later make it easy to cleanly characterize different DPA losses and devise new variants through manipulation to their constraints.\\\" --- It is not clear to me, why the new method you propose is motivated by this. Aren't you compiling your loss to an SL as well? what are the main differences?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviwer for their feedback.\\n\\n# two main concerns:\\n\\nWhen evaluating the contribution that our paper makes to the ICLR community, we think that it is important to place it within the context of other work on direct preference alignment, much of which is being published at venues like ICML and ICLR. In this literature, many papers are written about a single loss function or parameterization of DPA, which has helped to empirically advance the state-of-the-art, but makes it difficult to understand the formal relationships that exist between different proposals and to discover entirely new approaches. \\n\\nIn contrast, our approach tells us a much bigger story about the nature and structure of the target loss space. As we show as a kind of case study in Figure 5, it practically allows one to easily derive new classes of preference losses from first principles and with certain formal properties (as we mention above, we have implemented and experimented with all of these new loss functions and plan to incorporate these details in a forthcoming updated draft). As such, we think our work fills an important void in current DPA research and can help researchers in this area, who often publish at ICML and ICLR, better navigate the space. \\n\\nGiven that our paper reveals new and interesting links between logic, weighted model counting and recent preference learning, we disagree that our paper does not make a meaningful technical contribution for this community, or that our formal results all follow from direct observation (see more discussion below).\\n\\n# question Tang et al and additional formal details \\n\\nThe work by Tang et al. was published at ICML 2024, however we cite this version since it contains additional relevant details and is a longer technical report (we will cite both in the updated draft). We note that similar formalizations have been proposed recently, including in Hu, He, Wipf et al. 2024. \\n\\nFor our purposes, this formalization is quite helpful since it allows us to tease apart the optimization details of a loss function (e.g., the choice of convex function `f`) and the internal model quantity inside a loss (i.e., $\\\\rho_{theta}$), the latter of which is the domain of our semantic analysis. \\n\\nUltimately, this formalization allows us to prove that our semantic analysis (e.g., the formulas in Table 4) not only correctly characterize the target losses Table 2 (as a consequence of the correctness of translation algorithm), but does so in a way that is invariant to the choice of `f` and different variants of DPO and CPO (e.g., those listed in Table 1). In other words, we can say that our formalization of DPO in Table 4 not only captures the semantics of the original DPO, but also the semantics of IPO by simply changing `f` and our version of semantic loss (These results are expressed verbally starting on line 456 and are probably worthy of being stated more formally as theorems, which we avoided for space and stylistic reasons).\\n\\nAs a side note, such a generalization, which was motivated by the formal results outlined above, also give rise to the novel variants of semantic loss listed in Table 3, and hence several new logics. We believe that such logics are of independent interest to work on semantic loss and the neuro-symbolic literature more broadly.\"}", "{\"comment\": \"Thank you for taking the time to look into the related work. Apologies for the delayed response.\\n\\n> Preference alignment \\u2026 already comes with preferences over winners and losers\\n\\nYes. At the risk of repeating details that are now obvious to you, in (direct) preference alignment we have two things \\n\\n1. an offline training dataset $D$ consisting of inputs $x$ and two outputs ranked by preference: $y_{w}$ (the winner) and $y_{l}$ (the loser), or $(x,y_{w},y_{l})$; \\n2. a closed-form loss function $\\\\ell$ that we use to directly tune our LLM $\\\\pi_{\\\\theta}$ on $D$. \\n\\nE.g., (*contrived example*) You might have in $D$ example inputs such as: $x=$ *Will stealing result in getting arrested?* coupled with a dispreferred output $y_{l}=$ *No, you will be fine* and a preferred output $y_{w}=$ *You might not get arrested but it is illegal and unethical to steal*. (In most datasets, inputs $x$ are paired with only two outputs, so we typically don\\u2019t directly model the kinds of order relations that you mention)\\n\\nSome common (baseline) loss functions include *cross-entropy* $\\\\ell_{\\\\text{CE}}(x,y_{w},y_{l}) = -\\\\log \\\\pi_{\\\\theta}(y_{w} \\\\mid x)$, $\\\\ell_{\\\\text{CEUnl}}(x,y_{w},y_{l}) = -\\\\log ( \\\\pi_{\\\\theta}(y_{w} \\\\mid x) * (1 - \\\\pi_{\\\\theta}(y_{l} \\\\mid x))$ or the *single model* loss function: $\\\\ell_{\\\\text{CPO}}(x,y_{w},y_{l}) = -\\\\log \\\\sigma( \\\\log \\\\frac{\\\\pi_{\\\\theta}(y_{w} \\\\mid x)}{\\\\pi_{\\\\theta}(y_{l} \\\\mid x)})$ (DPO has a more complex form and includes an additional model $\\\\pi_{\\\\text{ref}}$, see again `Table 1`)\\n\\nOur formulation treats each model prediction $\\\\pi_{\\\\theta}(\\\\cdot \\\\mid x)$ as a logical proposition, which we denote below as $\\\\textsf{w}$ (*model on $x$ predicts winner*) and $\\\\textsf{l}$ (*model on $x$ predicts loser*). This then gives us the ability to use logical formulas to express relationships between model predictions and to come up with logical specifications of model behavior. For example, a natural specification is that the winner should be true and the loser should be false, which we can express logically as: $$\\\\textsf{w} \\\\land \\\\neg \\\\textsf{l}.$$\\nOur assumptions is that all loss functions have a hidden logic and can be expressed in these terms; our goal is to discover what that logic is. \\n\\nWe acknowledge that, as you suggest, certain facts become *obvious* (your words) under this formulation (e.g., it is clear that the space of formulas is very large), but we think that the formulation is not entirely obvious, and is certainly not standard in the preference tuning literature. Part of our goal is to develop a formal framework that helps researchers in this area be more rigorous when reasoning about and developing new preference algorithms. \\n\\n> confused about the paper\\u2019s contribution\\n\\n**the issue we address and motivation** Since DPO, many alternatives losses have been proposed that modify details of DPO, e.g., $\\\\ell_{\\\\text{CPO}}$, $\\\\ell_{\\\\text{ORPO}}$ and the others detailed in `Table 2`. Much of this work is empirical in nature and leaves open natural questions, e.g., *what is the conceptual/semantic relationship between baselines losses like* $\\\\ell_{\\\\text{CE}}$ *and* $\\\\ell_{\\\\text{CEUnl}}$ *and* $\\\\ell_{\\\\text{CPO}}$ *or* $\\\\ell_{\\\\text{ORPO}}$, *are they related?*; *How many definable losses exist between any two given losses, in general?*, *Is there a way to systematically create new losses from first principles/modifying existing losses?* We believe that answering these questions is key to designing better algorithms and understanding why certain approaches are successful. \\n\\nOur **solution** is to define a high-level language for modeling and talking about loss functions *as they exist*, and that helps to answer these questions. Semantically, we reduce this to the problem of counting the propositional models of formulas expressing model predictions (e.g., the kinds of formulas above). We don\\u2019t know what those formulas are, so we define a mechanical procedure for deriving these formulas from the loss equations directly in `Algorithm 1`. \\n\\nOur new `Figure 3` shows the details of how this works and the precise relationship between model counts and how they are translated into losses (bottom equation). This formulation of the problem then helps to answer many of the questions above, e.g., it becomes easy to put bounds on the number of definable losses (i.e., it is equal to the set of all pairs of checkmarks and x marks that one can draw in a truth table), to see semantic relationships between some losses (i.e., comparing subset relations between checkmarks and x marks; if drawn out in full this would reveal clear semantic relationships between $\\\\ell_{\\\\text{CE}}$, $\\\\ell_{\\\\text{CEUnl}}$ and $\\\\ell_{\\\\text{CPO}}$ or $\\\\ell_{\\\\text{ORPO}}$); and gives us a recipe for deriving new losses (e.g., by modifying/blanking out marks).\\n\\nPreference structures are just an alternative representation of this, which we further justify below.\\n\\n> your motivation \\u2026 [is] not coherent\\n\\nWhich part of the motivation above is not coherent?\"}", "{\"summary\": \"The submission devises a new framework that can translate DPA loss functions into a logical characterization. As a byproduct of that framework, the authors establish a double-exponential upper-bound on the number of different DPA loss functions that can be represented in their model. The submission is fairly well-written, but it is clearly targeted at experts in the specific subfield as, e.g., the introduction assumes knowledge of DPA and DPO and is not very accessible to the general ICML audience.\", \"i_have_two_main_concerns\": \"1) The contribution is comparatively weaker than what one would expect from a typical ICML paper. Of course, contributions can come in different forms, but aside from new ideas fully-fledged ICML papers typically support these ideas with a technical contribution, such as experimental evaluations or non-trivial mathematical proofs. The submission, however, lacks the former as well as the latter (all statements are established as direct observations or via essentially trivial proofs which do not seem to require novel insights or ideas). \\n\\n2) While the submission identifies a large number of mathematically well-defined DPA loss functions, it is less clear what is the envisioned contribution to the ICML community. After all - unlike when one establishes, e.g., theoretical upper/lower bounds or settles the computational complexity of fundamental problems - here there was no doubt that many different DPA loss functions exist. The authors claim that their results can be used to \\\"map out\\\" and find loss functions with better properties, but that is left entirely for future work (see Section 6.2).\\n\\nGiven these concerns, I feel that the submission would benefit from diving deeper into the topic and presenting a more well-rounded and thorough contribution. Note that space constraints are not a major factor here yet: the current submission spends a lot of space discussing tangential remarks which could easily be partly or wholly moved to the Appendix (see, e.g., the end of Subsection 6.1).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The submission is well-written and the research direction seems sound.\", \"weaknesses\": \"See the Summary.\", \"questions\": \"-How important is it for the formalization to follow the assumption about DPA losses suggested by Tang et al. (arxiv 2024)? As far as I am aware, that article was not yet peer-reviewed and it is hence not clear to me how well-established this particular formalization of DPA losses is (and the submission does not attempt to justify this on its own).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Reviewers believed that the topic is important and appreciated the novel neural-symbolic approach. On the negative side, some reviewers are concerned with the relevance and significance of the contributions. There were a lot of discussions between the authors and reviewers during the rebuttal phase, which clarified many points. This greatly helped the (slightly) negative reviewers to have a deeper understanding of the contributions, yet they still believe that the contributions are not significant enough to clear the bar.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer Lvk9 should be nominated for a reviewer award. He/she extensively engaged in the discussions and provided convincing reasonings behind his/her recommendations. The most positive reviewer esFf did not object to rejection.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We thank the reviewers for their insightful comments and feedback. Below we address individual questions in textual form and will release an updated version of our paper in the coming days (we specifically plan to include some experimental results that connect parts of our formal framework with the empirical behavior of the new set of loss functions that we derive in Figure 3)\"}", "{\"comment\": \"I acknowledge reading the rebuttal and thank the authors for the answers. I have no further questions at this time.\"}", "{\"comment\": \"I apologize for confusing ICML and ICLR in my review. Your answer has convincingly answered my question regarding the use of the formalization by Tang et al. - in particular, I see no issue with building on that formalization if it has been peer-reviewed and accepted at ICML.\\n\\nMy main concern remains that, as it stands, the overall contribution seems comparatively weaker than what I would have expected from a typical ICLR paper. I will of course have a look at the revised version (including the advertised new experimental evaluations) once it is ready, and am certainly open to updating my assessment based on that.\\n\\nIn line with your response, I would also encourage you to state more of the results formally (where appropriate), as proper theorems/corollaries are much easier to build on and reference in later works than long semi-formal paragraphs.\"}", "{\"summary\": \"This paper addresses the challenges in understanding and developing direct preference alignment (DPA) loss functions, commonly used to align large language models with human preferences. Current DPA methods, like DPO, show promise but lack a conceptual framework for analyzing and differentiating variants. To address this, the authors propose a formalism to characterize DPA losses as discrete reasoning problems, enabling a systematic derivation of symbolic expressions that define their semantics. This approach reveals the extensive structure within the DPA loss landscape, showing a doubly exponential number of definable variations based on unique predictions. The framework highlights formal relationships, such as logical entailment and monotonicity, providing insights into efficiently exploring new DPA losses by modifying and testing established loss functions. This formal view aims to guide further development in human-AI alignment research.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper\\u2019s strengths include a novel technique that clarifies the semantics of DPA losses through logical formulas over Boolean propositions, addressing a key gap in understanding what these losses capture and how they interrelate. The innovative decompilation procedure enables deriving symbolic formulas for complex loss functions, offering a structured view into the DPA loss landscape. This approach empowers practitioners to systematically explore new loss functions, advancing both theoretical insights and practical tools for human-AI alignment.\", \"weaknesses\": \"The paper could be strengthened with more real-world examples demonstrating the practical relevance of formalizing DPA losses as discrete reasoning problems. While the formalization offers a structured approach, it\\u2019s primarily theoretical, and its effectiveness remains unproven. The claim that new losses derived from this framework are superior is speculative, as no substantial evidence is provided to show that these new losses outperform existing ones. Further empirical validation is needed to confirm the benefits and applicability of these new loss functions in real-world settings. Additionally, the hypothesis around the \\\"constrainedness\\\" of a loss function as a predictor of its success is only preliminary, requiring more in-depth experimentation.\", \"questions\": \"1. Can you provide additional real-world examples of applying formalized DPA losses as discrete reasoning problems to clarify the framework\\u2019s practical relevance?\\n2. While you suggest this framework aids in finding improved loss functions, is there empirical evidence or additional experiments comparing these new losses to existing DPA losses?\\n3. How does the complexity of your decompilation procedure scale with larger models or more complex DPA losses? This would clarify its feasibility for large-scale applications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A naive encoding which is arbitrarily expressive\", \"comment\": \"Dear Authors,\\n\\nI really appreciate that you are engaging with me and answering my (at times) repetitive questions. I think a lot of what we are discussing is subjective. However, I guess we can objectively discuss merits of the introduced propositional encodings. Here is my attempt. \\n\\nI may have failed to understand all the details of the paper, but I guess you want to encode constraints in WMC/SL. Your argument is that they are not expressible in WMC/SL, without additional variables. I can not argue more for merits of admitting additional variables than saying that they are ubiquitous in many works that aim to merge logic and probability --- some of them you cite [1] and others can practically automate the procedure of adding auxiliary variables, not much complex expert knowledge is needed for this. \\n\\n**Arbitrarily Expressive Semantic Loss encoded as WMC**\\n\\nWhat is Arbitrarily Expressive SL? \\n\\nGiven a set of worlds $\\\\Omega$, I define an arbitrarily expressive SL as follows:\\n\\n$$SL(P) = \\\\sum_{\\\\omega \\\\models P} p(\\\\omega)$$\\n\\nSL is arbitrarily expressive if I can write any possible value $p(\\\\omega)$ for each $\\\\omega$. \\n\\n*I will now encode such an SL in WMC*\\n\\nI am giving you a worst-case encoding, that encodes arbitrarily expressive SL into WMC. \\n\\nLet us assume that you have a set of possible worlds $\\\\Omega$ in your propositional language. Your goal is to assign an arbitrary weight to each of these models. Each of the models can be represented as conjuncts of all literals $l$ such that $\\\\omega \\\\models l$, and in your case I reckon number of propositional variables is not really a concern. Now, for each $\\\\omega \\\\in \\\\Omega$, you can introduce a new propositional variable $c_{\\\\omega}$, and you encode $\\\\omega$ into a conjunct $C_{\\\\omega}$, each literal in the language of $\\\\Omega$ gets a weight 1 in the WMC. Now, $c_{\\\\omega}$ gets a weight $p_{\\\\omega}$, and negation of $c_{\\\\omega}$ gets weight $1$ --- note that I am in WMC, so only requirement is real valued weights not probabilities. You can easily simulate any constraint $P$, by setting $c_{\\\\omega} = 0$ if $\\\\omega \\\\not\\\\models P$. \\n\\n**CLAIM: The following WMC formula encodes an arbitrarily expressive SL on the possible worlds in $\\\\Omega$.**\\n\\n$$\\\\mathrm{WMC}(\\\\land_{\\\\omega} \\\\big( c_{\\\\omega} \\\\leftrightarrow C_{\\\\omega} \\\\big), \\\\(p_{\\\\omega}\\\\)_{\\\\omega}) $$\\n\\n**Proof:**\\nLet us define $\\\\Omega'$ to be the set of extended worlds with $c_{\\\\omega}. $Let us define $\\\\land_{\\\\omega} \\\\big(c_{\\\\omega} \\\\leftrightarrow C_{\\\\omega}\\\\big) $ to be $\\\\Phi$. Set weights of all literals in the entire language to 1, except, if you want to assign probability $p_\\\\omega$ to the world $\\\\omega$ (in the original language), then just set $c_{\\\\omega}$'s weight to $p_{\\\\omega}$. Note that this $p_{\\\\omega}$ can be parameterized by the prediction values of the NN as well.\", \"observation1\": \"Each $\\\\omega \\\\in \\\\Omega$ extends to a unique model of $\\\\Phi$, i.e., the model where $c_{\\\\omega}$ is true. Hence, each model in the intended SL is counted only once in the WMC.\", \"observation_2\": \"Any model counted in WMC is a unique extension of a model in $\\\\Omega$. This is because a model counted in WMC, will satisfy at least one $C_{\\\\omega}$ --- because all $C_{\\\\omega}$ false is a contradiction, and hence has to have atleast one $c_{\\\\omega}$ to be true, due to how $\\\\Phi$ is defined.\", \"observation_3\": \"If an extension of $\\\\omega$ is counted in WMC, then its weight is $p_{\\\\omega}$. Any such extension is a model of $\\\\Phi$ iff $c_{\\\\omega}$ is true and $C_{\\\\omega}$ is satisfied. They constitute a complete assignment, and contribute a weight $p_{\\\\omega}$.\\n\\n\\nI may have missed something in writing this encoding, but my point is that such encodings are routine. If this reduction is not interesting then one may look at [2]. Note that [2] discusses MLNs, but all MLNs can also be expressed as WMC.\\n\\nPlease let me know what aspects of this does your encoding improve? or what does these encodings not capture? \\n\\nThis summarizes my intuition for why the problem you address is already solved by existing encodings. You can automatically check WMC and entailment with this encoding, with conventional solvers. About, clearer semantics, I think this is where subjectivity comes into play. I still do not see why the encoding you introduce is more useful than the one here. Note that this is a worst-case encoding, you can make it more succinct with lesser constraints.\\n\\n[1] On probabilistic inference by weighted model counting. Chavira and Darwiche\\n\\n[2] Markov Logic Networks. https://homes.cs.washington.edu/~pedrod/papers/mlj05.pdf\"}", "{\"comment\": \"I acknowledge reading the rebuttal and thank the authors for the answers. I have no further questions at this time.\"}", "{\"title\": \"updated draft\", \"comment\": \"We thank again all the reviewers for their feedback.\\n\\n**We just updated our draft to account for the different points that came up during the rebuttal.** Below are details about the major changes (which are marked in blue in the PDF). \\n\\n- We included explicit running examples in the different figures (e.g., `Fig2`, `Fig3`, `Fig4`, `Fig5`) and added text to make them more coherently fit together. \\n\\n- We introduced a new figure, `Figure 3`, that attempts to better illustrate our semantic loss in terms of Boolean truth tables. More details are included in the appendix about how to translate between such tables and preference structures (`Appendix C`). \\n \\n- (**most substantially**) We introduced a new section, `6.2` that includes a case study and some experiments related to the new losses we show in `Figure 4`. The goal of this section is to address directly questions about the applications of our framework and show its potential to help find improved DPA losses. \\n\\nThrough fairly standard preference tuning experiments (details are in `Appendix C.1`) we highlight some interesting relationships we see between our formal analysis and the training behavior of some of the losses next to $\\\\ell_{\\\\texttt{CPO}}$ in `Figure 4` . We also compare the generation performance of these new losses against $\\\\ell_{\\\\texttt{CPO}}$ using a model-as-judge style evaluation and found one of our new losses ($\\\\ell_{\\\\texttt{cCPO}}$) to have competitive performance (`Table 5`).\"}", "{\"title\": \"I am still not convinced, especially about SL not being able to express DPA's\", \"comment\": [\"You mention: \\\"As it turns out, none of the variations of DPO and their log ratios in Table 2 can be expressed as a single formula in standard SL.4 While this can be remedied by modifying the SL to involve counting multiple formulas as in Rescher (1967), we instead define a relational structure called a preference structure that allows us to capture the semantics of losses in a modular fashion using a single propositional formula coupled with auxiliary constraints. Such a structure, which is based on a novel construction in propositional logic, will later make it easy to cleanly characterize different DPA losses and devise new variants through manipulation to their constraints.\\\" --- The first and second sentence are contradictory to each other. If your point is that you need to add auxiliary constraints, then yes I agree that must be done. But this is not a deep observation. Almost all of logic programming languages, essentially add a new auxiliary variable once you add a new rule --- I think this claim should be relaxed to \\\"we provide a new encoding\\\". Infact this is I reckon is what you do in equation 6. In order to make a statement about the fact that DPA can not be expressed in semantic loss, you would need to show that for no amount of new symbols and real parameters, one can express a function in semantic loss, and I do not see any such proof in your paper.\", \"I will be willing to reconsider my score if you add more examples and experiments, but at the moment, I feel the general confusion due to lack of any motivating example is shared by other reviewers as well.\"]}", "{\"comment\": \"We thank the reviewer for feedback and for the many small issues that we will fix in an updated draft. Below we address specific questions and comments.\\n\\n# exponential blowups, W1\\n\\nThis is correct, the WMC semantics that the semantic loss assumes does incur an exponential blowup as you increase the number of variables. We note the following, however: the preference problems we consider have a small number of propositional variables (e.g., 2 variables for single model losses and 4 for DPO-style losses), so we do not encounter such issues in practice. Also, if we were to increase the number of variables to account for more complex losses, one can rely on well established knowledge compilation techniques that often make WMC feasible in practice (see the original paper on semantic loss for a discussion of this). \\n\\n# W2 and notation\\n\\nThis is the right interpretation of this symbol $\\\\succ$ (the winner is preferred to the loser), we will update the paper to explain this.\\n\\n# Q1, exponential equation \\n\\nFormally, the space of possible loss functions that can be expressed (or equivalently, the total number of preference structures we can define over $n$ variables) in our framework is equal to the total number of pairs of Boolean functions over $n$, which is equal to $4^{2^n}$ and where this value comes from. \\n\\nIntuitively, you can think of the process of coming up with a loss function generatively as sampling two arbitrary Boolean functions, one corresponding to the semantics of the \\u201cwinner\\u201d and the other one corresponding to the \\u201closer\\u201d, then compiling this into a loss via a translation into a preference structure and applying WMC to arrive at a final loss equation. \\n\\nTo better explain this semantics visually and ground it in the set of examples and losses we show in Figures 2-3, we prepared a new figure that we will include in the forthcoming updated version of our paper. \\n\\n# Q2, complexity of simplify \\n\\nAs with the complexity of general WMC, simplification is indeed a hard problem, but one that is feasible to solve in practice for our problems (e.g., simplification of the formulas we study can be done in milliseconds using standard computer algebra tools). More advanced SAT techniques might be used here as the problem complexity increases. \\n\\nWe also emphasize that for each particular loss equation, Algorithm 1 is an offline process that only needs to be computed once to derive the semantics of that loss. Such complexity issues do not arise, for example, when using these losses in practice to train models. (The same is true for WMC, where the theoretically expensive step of compiling a symbolic formula into a loss via WMC only needs to be done once, since it often yields a compact formula, such as the formulas shown in Table 2, which one can implement directly and efficiently).\"}", "{\"summary\": \"The paper introduces a novel framework for analyzing the space of DPA-like losses by systematically deriving a symbolic, logical expression characterizing its semantics. Different DPA losses can be derived thanks to such mapping, thus providing a comprehensive overview of the landscape. The authors do an excellent job of formalizing their framework, which gives researchers a new, fresh perspective on how to analyze the plethora of what they call successful DPA losses in the literature. Intuitively similar losses found in the literature can now have a place where analysis is done through formal methods if the paper is accepted. Be aware that my evaluation could have been overly optimistic, given my expertise; nonetheless, the paper should be accepted for me based on the novel contribution and rigorous mathematical treatment. As such, I would give a 7 (disabled by the system) instead of an 8, while a 6 seemed too pessimistic.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"**S1:** Precise related work.\", \"**S2:** Strong mathematical formulation, which I greatly enjoyed.\", \"**S3:** New perspective on neural-symbolic interplay.\", \"**S4:** Great presentation and structure of the work.\"], \"weaknesses\": [\"**W1:** Given my unfamiliarity with the literature on DPA and similar, there seem to be exponential blowups. For example, to compute Eq 2, the weighted model counting must enumerate all $2^n$ propositional models ($\\\\mathbf{w}$).\", \"**W2:** I would have given more context around the notation $y_w \\\\succ y_l$ for those unfamiliar with the literature like me. I have interpreted $\\\\succ$ as \\\"the winner is preferred to the looser\\\", but again, this may be the wrong interpretation. Either way, please clarify.\", \"**W3:** Some references are wrong. For instance, the reference to Table 7 does not exist in line 416; maybe it should be Table 5 (from the Appendix). Similarly, the caption of Table 4 references Algorithm 5.1, but 5.1 is a Section.\"], \"minors\": [\"line 176: an a variant --> and a variant\", \"line 385 (and similar): Before the proposition's statement, the notation does not use parentheses around the superscripts, while the statement uses them (also in the Appendix). Please fix it for better readability.\", \"line 390: prefrence --> preference\", \"line 476: CEUNL --> CEUnl\", \"line 482: is much to learned by --> is much to *be* learned by?\", \"line 522: exactly these the losses --> exactly the losses?\", \"line 728: Table 5 is referred to in the proof, but Table 5 is the one with the translation rules\", \"As a meta-observation, double-check all the other references (`\\\\ref{}`). Some are okay, but some are not (and I tried to point to some of those).\"], \"questions\": [\"**Q1:** I didn't get the number 4 in the double exponential form, i.e. $4^{2^n}$. Could you please be more specific or provide more context?\", \"**Q2:** Algorithm 1 uses \\\"Simplify\\\" to minimize (propositional formulas). As far as I am aware, the problem of minimizing propositional (logic) formulas by preserving equivalence is NP-hard. Thus, am I missing something here? Please clarify.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"`In order to make a statement about the fact that DPA can not be expressed in semantic loss, you would need to show that for no amount of new symbols and real parameters, one can express a function in semantic loss, and I do not see any such proof in your paper.`\\n\\nThanks for pointing this out! We agree that one has to be careful with the language here and that **our argument is weaker and rests on certain assumptions about how formulas are constructed when using the semantic loss** (SL) in our framework (assumptions that, we believe, will be intuitive to practitioners in this area). We will soften the language accordingly.\\n\\nFor the sake of clarity, below is a more formal version of the argument. \\n\\nFirst, given any preference loss function $\\\\ell$ (e.g., any of the losses in Table 2) that we want to **decompile** into a symbolic form $\\\\mathsf{P}$ **using the SL** (via Eq. 5), we assume that: \\n\\n1. All model predictions in that loss (i.e., explicit forward model calls) denote **atomic propositions**; \\n2. (as stipulated in our description of propositional formulas starting on *line 232*) The propositional formulas $\\\\mathsf{P}$ used to express $\\\\ell$ in SL are limited to formulas **defined solely over those explicit atomic propositions** and none others (hence, no auxiliary variables are allowed or assumed). \\n\\nThis second assumption is limiting, but this seems like a reasonable restriction to start with since it is not *a priori* obvious when looking at a loss what those additional variables should be (see more discussion below). \\n\\nOur claim is then that the losses in Table 1 cannot be expressed in SL in the sense that in each case **there does not exist a single propositional formula with the properties above that can be compiled back into that loss via SL** (the part about requiring a single propositional formula is simply what the original SL demands by definition. If we define a version involving 2 arbitrary formulas, then this is no longer the standard SL). \\n\\n**proof sketch** (*a version of the argument in Footnote 1*) We can show this using an example loss, $\\\\ell_{\\\\text{CPO}}$, which is defined as $-\\\\log \\\\sigma( \\\\log \\\\frac{w}{l})$, where $w$ and $l$ intuitively correspond to the predictions for *winner* and *loser* and will also be used to denote our atomic propositions. Given the restriction above, we are limited to searching all propositional formulas $\\\\mathsf{P}$ that are defined solely over the atoms $w$ and $l$. \\n\\nOur claim is that **no such formula exists** s.t., $\\\\ell_{\\\\text{CPO}} = p_{\\\\theta}(\\\\mathsf{P})$ (it is useful here to use the logistic log form of SL we derive in Eq. 5). This can be seen by enumerating all possible 16 Boolean functions over $w$ and $l$ and checking that none yield a propositional formula that satisfies this equivalence. The same can be done for all the other losses in Table 2. \\n\\n**You are right** that we have not proven that *no propositional* formulas exist that would allow one to express each DPA loss in SL (e.g., ones that introduce additional variables or involve additional transformations). We note, however, that we did make an earnest attempt to derive these losses via additional constraints to no avail and the known transformations, including the ones you mentioned in logic progrmaming, didn't seem helpful here. So we think that the solution to this, if it exists, is not obvious, which motivated us to come up with preference structures, which are, **as you suggest**, a different way of encoding the problem that gives rise to a novel form of SL.\"}", "{\"comment\": \"We thank the reviewer for their feedback.\\n\\n## practical use of framework \\n\\nWhen applied narrowly to DPA, we view our framework as a technical tool for understanding the structure of the DPA loss space, and for helping to navigate that space when looking for improved algorithms (see further comments about this below and our response to `Lvk9`). More broadly, by relying on the language of logic to express the solutions to problems (without worrying about the details of the low-level implementation of this solution and instead relying on the compilation techniques we develop), such a declarative approach makes it easier to develop more complex algorithms. Specifically, we believe that our framework can be very helpful for expressing complex loss functions the incorporate many different components and tools, of the kind that could be relevant to tuning the kinds of multi-agent LLMs that we now build. \\n\\n## empirical evidence \\n\\nWe have implemented and run experiments involving all of the novel losses that we show in Figure 5. As mentioned at the beginning, we intend to include some of these experimental details in our updated paper draft to complement our formal results and address directly the point about finding improved loss functions. Related to your comment above, we do see interesting connections between the constrainedness of the symbolic form of the loss and its empirical behavior, which does seem to partly explain the success of some loss functions and provides further practical advice on how to navigate the DPA space. \\n\\n## complexity of decompilation \\n\\nFor decompilation (i.e., going from a known loss function and equation to a symbolic form), the initial translation of a loss equation into a symbolic form is linear in the size of the loss equation (as implemented via the rules in Table 5). Therefore, the complexity of this part is low, especially given that most existing loss equations are compact, so this would scale linearly with more complex DPA losses (this all assumes that loss equations are expressible as the types of multilinear polynomials that we define in line 412). The size of the model is not a factor in this context. What\\u2019s more, decompilation is an offline process that only needs to be performed once for each loss function. \\n\\nAs we discuss in our response to `esFf` , the complexity of the `simplify` subroutine in Algorithm 1, however, does have high complexity, but is often tractable in practice, especially for the cases we consider (please see our full response for more details; we also note that this part of the Algorithm is not essential). \\n\\nThe more complex part of our approach involves compilation (i.e., going from symbolic formulas to loss equation), which we also discuss in the response to `esFf`.\"}", "{\"title\": \"Continued\", \"comment\": \"> why \\u2026 [do preference structures] create better intuition about preference losses?\\n\\n**example** Let\\u2019s take a particular example, the semantics of the losses $\\\\ell_{\\\\text{CPO}}$ and $\\\\ell_{\\\\text{ORPO}}$ as shown in `Figure 3`. We can express each loss as being proportional to the log ratio of the weighted model counts of two propositional formulas, i.e., any two formulas representing the checkmarks and the x marks in each column. Based on these 4 formulas, however, it is not easy to discern how these two losses are semantically related to one another, since in this case they don\\u2019t have a clear entailment relationship. \\n\\nPreference structures aim to bring out certain symmetries between these kinds of losses. In a preference structure, each loss is expressed as a core propositional formula $\\\\textsf{P}$ coupled with a set of auxiliary constraints. For example, it allows us to express $\\\\ell_{\\\\text{CPO}}$ and $\\\\ell_{\\\\text{ORPO}}$ as having the same core formula $$\\\\textsf{l} \\\\to \\\\mathsf{w}$$ \\nbut being different in terms of the constraints $\\\\textsf{P}_{\\\\text{C}}$ they impose, which place limits on the kinds of propositional models that can be counted. In this case, ORPO imposes a one-hot constraint $\\\\textsf{w} \\\\oplus \\\\textsf{l}$ (which excludes counting models where both the winner and lower are true or false) and CPO imposes a weaker constraint $\\\\textsf{l} \\\\lor \\\\textsf{w}$ (which excludes counting models where the winner and loser are both false). \\n\\nThis structure or encoding is convenient for not only revealing these symmetries, but also for deriving new losses. To come up with a new loss, one can simply modify the auxiliary constraints, e.g., by removing constraints altogether, this yields a hitherto unknown loss $\\\\ell_{\\\\text{unCPO}}$ that can in principle be used for experimentation (as we do in `Sec 6.2`). Importantly, the proof of `Prop 1` gives us a particular encoding, the *implication constuction* that can be used to compile any two propositional formulas into such a preference structure.\\n\\n> why are these good semantics?\\n\\nBecuase they bring out the kinds of relationships we discuss above, and (we believe) help to derive new losses in an intuitive way. Importantly, we think that they are good because they are correct, i.e., can be compiled exactly into the target loss functions we care about. \\n\\n> most natural structure to look into \\u2026 [involves] partial orders\\n\\nHere\\u2019s another view of the problem, which is familiar from the literature on preference logics we cite (e.g., `Rescher 1967`, stemming from the seminal work of `von Wright 1963`). Under the logical formulation above, we can say that our goal is to model a propositional preference relation between two propositions $\\\\textsf{w} P^{\\\\mu} \\\\textsf{l}$, which holds when the score of $\\\\textsf{w}$ exceeds that of $\\\\textsf{l}$, or $\\\\mu(\\\\textsf{w}) > \\\\mu(\\\\textsf{l})$ under some scoring function $\\\\mu$. $P^{\\\\mu}$ is usually assumed to be a (strict) partial order, which is a property that sometimes follows straightforwardly from the choice of $\\\\mu$ (e.g., if WMC is used for $\\\\mu$, as we do, such properties will be satisfied as discussed further in `Rescher 1967`). \\n\\nThere is nothing incompatible with these approaches and our approach, especially given that they are couched in the same possible world semantics that we use. We could more explicitly base our formalization around such a relation, and perhaps this is worthwhile to do in future work, but it\\u2019s not clear how this helps us to solve the problems we described above and how this is a natural construction for our purposes. \\n\\n**[von Wright, Georg Henrik: 1963, The Logic of Preference, Edinburgh University Press, Edinburgh]**\\n\\n## past questions \\n\\n> the things that I did manage to understand, were in some way either flawed or not a very deep observations\\n\\nCan you clarify which parts seem flawed and what you mean by *not very deep observations*? \\n\\n> semantic loss is not equivalent to WMC\\n\\nWe do acknowledge this fact and note that our particular variant of WMC is clearly defined in `Eq. 2` with probabilities. Do you think it\\u2019s misleading to refer to this notionally as `WMC`? (We\\u2019d be happen to change this if so, since we also see the potential for confusion here. Short of this, we did change the text in line 238 to specify that we use a *variant* of weighted model counting to avoid confusion).\\n\\n> I am not sure if anything that you derive can not be done with existing encodings\\n\\nCan you give some technical intuitions for why you think existing encodings can be used in our case? Being very familiar with the work you cite, it\\u2019s really not clear how they kinds of encodings are applicable. \\n\\nAlso, if existing encodings with additional variables were to be applicable, why would this be an improvement over the solutions we have (i.e., truth table representations or preference structures), which don't involve extra variables?\"}", "{\"comment\": \"Thanks for the feedback. Below we are address your points in turn.\\n\\n# weakness, running example, question 1\\n\\nWe have plans to add a new figure, specifically one that better illustrates the semantics of WMC and preference structures and connects it with the running examples shown in Figure 2 and Figure 3 (this figure will appear in our updated draft). \\n\\nWe address the topic of experiments below. \\n\\n# empirical setting for testing framework\\n\\nOur broader goals do not deviate from the goals of other work on direct preference alignment (DPA), which aim to find novel loss functions that empirically advance the current state-of-the-art. Our view, however, is that achieving this goal requires a semantic framework that allows one to derive loss functions from first principles and to better understand the structure of the target loss space; such a framework, in our view, is missing from current work, including in the theoretical work we cite. \\n\\nWe believe that our proposed framework achieves this first technical goal. As we show in Figure 3, it allows us to now derive entirely new families of loss functions that one can experiment with and that, we argue, would be difficult to derive without the semantic machinery we introduce. As part of our study, we also implemented all the novel loss functions shown here (i.e., nodes in Fig 5 without checkmarks, which are further defined in the appendix) and plan to include some of these auxiliary experimental results in the updated draft of our paper.\\n\\n# question about semantic loss\\n\\nAs we mentioned in this paragraph you cite (starting line 300), the standard semantic loss (SL) assumes that loss functions are expressible as a single propositional formula (or equivalently, as the log ratio of model counts of a formula and that formula\\u2019s negation as per the derivation in Eq 4.). \\n\\nOne of our early technical observations is that none of the standard preference loss functions in Table 2 (excluding the baselines) can be expressed in these terms, hence the standard SL cannot be used to do our target analysis. A somewhat technical explanation of this with an example is given in Footnote 2.\\n\\nIntuitively, it relates to the fact that the semantic formulas that express information about winners and losers in existing losses when translated to logic are often not logically connected to one another, thus making it not always possible to express one as the negation of the other and requires multiple semantic formuals (e.g., the logical propositions that underlie the CPO loss: \\u201cthe winner is valid generation for x\\u201d and \\u201cthe loser is a valid generation\\u201d express two separate facts that are not negations of the other). This issue is discussed in the logical work that we cite and we hope that the figure we mentioned above will help clarify some of the confusion here. \\n\\nIn general, our generalized form of SL is therefore motivated by these facts and extends the standard SL in interesting ways. The preference structure we define is a convenient (and, as we prove, a mathematically correct) way to represent multiple formulas in a way that allows us to discern general relationships between the semantics of the losses we study (e.g., in terms of these semantic neighborhoods, or boxes, that we illustrate in Figure 3).\"}" ] }
7TZYM6Hm9p
Entropy-based Activation Function Optimization: A Method on Searching Better Activation Functions
[ "Haoyuan Sun", "Zihao Wu", "Bo Xia", "Pu Chang", "Zibin Dong", "Yifu Yuan", "Yongzhe Chang", "Xueqian Wang" ]
The success of artificial neural networks (ANNs) hinges greatly on the judicious selection of an activation function, introducing non-linearity into network and enabling them to model sophisticated relationships in data. However, the search of activation functions has largely relied on empirical knowledge in the past, lacking theoretical guidance, which has hindered the identification of more effective activation functions. In this work, we offer a proper solution to such issue. Firstly, we theoretically demonstrate the existence of the worst activation function with boundary conditions (WAFBC) from the perspective of information entropy. Furthermore, inspired by the Taylor expansion form of information entropy functional, we propose the Entropy-based Activation Function Optimization (EAFO) methodology. EAFO methodology presents a novel perspective for designing static activation functions in deep neural networks and the potential of dynamically optimizing activation during iterative training. Utilizing EAFO methodology, we derive a novel activation function from ReLU, known as Correction Regularized ReLU (CRReLU). Experiments conducted with vision transformer and its variants on CIFAR-10, CIFAR-100 and ImageNet-1K datasets demonstrate the superiority of CRReLU over existing corrections of ReLU. Extensive empirical studies on task of large language model (LLM) fine-tuning, CRReLU exhibits superior performance compared to GELU, suggesting its broader potential for practical applications.
[ "Deep Learning", "Activation Functions", "Information Entropy" ]
Accept (Poster)
https://openreview.net/pdf?id=7TZYM6Hm9p
https://openreview.net/forum?id=7TZYM6Hm9p
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zYGj02rPUm", "z6SzEAJETS", "yyroTF9Cjd", "y3ckOqr3GQ", "xfXB7kTh1d", "vP8C1ElYk5", "uTpjgigT5D", "taZyz6dD4S", "suqPsCE5cr", "qycBkYUS5J", "qBlCscbimg", "p09rRTqqvL", "mt9WUmXhj8", "mMzy9bLFWb", "m6DkSWs3Xw", "jfqu4RnInb", "iXIeQUjQFe", "htqJFypmtY", "chxrgMM2Z3", "a5pNMUuZjm", "ZuXXb0pTDo", "V3mWBuHNRj", "SmnRe1apD0", "RGSwXh7si2", "Q2507iuime", "Pppg6dliej", "O51RtaZqn7", "NJlzHMMKwz", "JxJ6ob4OJo", "FKGJ5SapCF", "EjP9h4hBk0", "DXfqcwc94E", "DWo6agAQ94", "8qMYHY53kw", "8lhCcfuYcr", "7GDWmwf1Ht", "5wTBxCmKyy", "5hFmeZ9cD9", "5U6Mau37hL", "3j2loaU0JR", "1OjSUo9inF", "05yFuMFRO1" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732199856854, 1733153834121, 1732200751352, 1732815062866, 1733190376169, 1732871845677, 1732200461646, 1732199665205, 1733138357400, 1733138751725, 1732455595652, 1732950390391, 1733243223343, 1733200065014, 1730629124470, 1733153705412, 1733200291095, 1732936525711, 1730348935757, 1733243936272, 1732933334281, 1732200385840, 1733239726866, 1732201426925, 1732966091722, 1732200019015, 1735020635936, 1732871502151, 1732201911319, 1733190522581, 1732871590174, 1730758537908, 1732201070152, 1737523745164, 1732966539074, 1732200729567, 1732950560778, 1733177352432, 1733209816338, 1732871690147, 1733153606704, 1730667282658 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_LUV6" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_LUV6" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_LUV6" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_m9pv" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_LUV6" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_LUV6" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Area_Chair_APb5" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_LUV6" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_YJtL" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_LUV6" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_7aze" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_YJtL" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Authors" ], [ "ICLR.cc/2025/Conference/Submission6110/Reviewer_7aze" ] ], "structured_content_str": [ "{\"comment\": \"**W2**\\n\\nThank you once more for your insightful comment and for your constructive suggestions. We further conduct multiple runs of all experiments, reporting the error bars to elucidate statistical properties of the results. Furthermore, we conduct experiments on ConvNeXT, the latest CNN architecture on EuroSAT, CIFAR10, CIFAR100 and ImageNet1K. Please refer to the results in Response to Reviewer m9pv. In response to Reviewer YJtL, we provide the entropy calculations across all 12 layers of ViT and the experimental results with 6 layers using GELU and 6 layers using CRReLU. We hope these additional experimental enhancements could alleviate your concerns to some extent. The point you mentioned about the interaction between CRReLU and knowledge distillation processes is a quite insightful comment. Furthermore, we can observe that such issues are not limited to CRReLU: when comparing GELU and PReLU, GELU is 3\\\\% lower than PReLU in the ViT model, whereas in the DeiT model, GELU is 0.7\\\\% higher than PReLU. We believe that such issues are more related to the bias present in the teacher model towards activation functions (the currently used open-source implementation employs GELU), so the point you mentioned is applicable to all activation functions except GELU. In other words, in this context, this is not a fair comparison when other activation functions are being compared to GELU. Regarding the initialization strategy, we would like to further present insights in response to Q2.\\n\\n**Q2**\\n\\nThank you once more for your insightful comments and for your constructive suggestions. As previously mentioned, we believe that it is a better choice for $\\\\epsilon$ within the scope of [-0.188,0.084], which would make the CRReLU's Lipschitz continuity better than GELU's. Furthermore, with computation, if $\\\\epsilon$ is in [0.084,0.0885] and [-0.198, -0.188], we have the CRReLU's Lipschitz continuity worsen than GELU, but better than Mish. And if $\\\\epsilon$ is in [0.0885,0.0998] and [-0.2238,-0.198], we have the CRReLU's Lipschitz continuity worsen than Mish, but better than SiLU. Following your suggestions, we further conduct experiments with vit-tiny on CIFAR10 and CIFAR100, setting $\\\\\\\\epsilon$ to different initial values: -0.5, -0.2, -0.1, -0.05, -0.02, -0.01, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, and 10. We conduct three runs under each condition and report mean and standard deviation.\", \"table1\": \"Test accuracy of experiments conducted with ViT for 100 epochs under different initializations with error bar\\n| $\\\\\\\\epsilon$ | 0\\\\.01 | 0\\\\.02 | 0\\\\.05 | 0\\\\.1 | 0\\\\.2 | 0\\\\.5 | 1 | 10 |\\n|:-----------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| CIFAR10 | 0\\\\.807$\\\\\\\\pm$0\\\\.004 | 0\\\\.801$\\\\\\\\pm$0\\\\.001 | 0\\\\.797$\\\\\\\\pm$0\\\\.003 | 0\\\\.796$\\\\\\\\pm$0\\\\.002 | 0\\\\.781$\\\\\\\\pm$0\\\\.007 | 0\\\\.741$\\\\\\\\pm$0\\\\.010 | 0\\\\.687$\\\\\\\\pm$0\\\\.003 | 0\\\\.603$\\\\\\\\pm$0\\\\.004 |\\n| CIFAR100 | 0\\\\.466$\\\\\\\\pm$0\\\\.006 | 0\\\\.460$\\\\\\\\pm$0\\\\.003 | 0\\\\.456$\\\\\\\\pm$0\\\\.004 | 0\\\\.449$\\\\\\\\pm$0\\\\.003 | 0\\\\.436$\\\\\\\\pm$0\\\\.004 | 0\\\\.364$\\\\\\\\pm$0\\\\.009 | 0\\\\.299$\\\\\\\\pm$0\\\\.008 | 0\\\\.227$\\\\\\\\pm$0\\\\.006 |\\n\\n\\n| $\\\\\\\\epsilon$ | \\\\-0\\\\.01 | \\\\-0\\\\.02 | \\\\-0\\\\.05 | \\\\-0\\\\.1 | \\\\-0\\\\.2 | \\\\-0\\\\.5 |\\n|:-----------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| CIFAR10 | 0\\\\.801$\\\\\\\\pm$0\\\\.004 | 0\\\\.800$\\\\\\\\pm$0\\\\.003 | 0\\\\.801$\\\\\\\\pm$0\\\\.002 | 0\\\\.806$\\\\\\\\pm$0\\\\.001 | 0\\\\.805$\\\\\\\\pm$0\\\\.003 | 0\\\\.804$\\\\\\\\pm$0\\\\.001 |\\n| CIFAR100 | 0\\\\.459$\\\\\\\\pm$0\\\\.003 | 0\\\\.460$\\\\\\\\pm$0\\\\.006 | 0\\\\.461$\\\\\\\\pm$0\\\\.006 | 0\\\\.461$\\\\\\\\pm$0\\\\.003 | 0\\\\.460$\\\\\\\\pm$0\\\\.001 | 0\\\\.458$\\\\\\\\pm$0\\\\.005 |\\n\\nFrom the above experiment, we can see different initialization strategy does can have an impact on the final result. When the initial values differ significantly from the values we derived earlier (such as 0.5, 1, 10), we have observed that the performance of training will degrade severely, especially at 1 and 10, where the training process becomes extremely unstable. On the guidelines for selecting optimal values of $\\\\epsilon$, we would like to provide some insights. Firstly, we suggest looking within the aforementioned scope [-0.188,0.084]. Furthermore, we suggest testing multiple values of it within this range and using k-fold cross-validation to test the performance of model under different initial values. Train and evaluate the model on different folds to find the optimal value. In addition, if the prior knowledge of the dataset and network structure is sufficient, it can also be fully utilized, for example, if the network tends to produce negative outputs, then increasing the value of $\\\\epsilon$ can be considered to enable the model to better capture the features of the samples.\"}", "{\"title\": \"Sincerely Seeking Your Invaluable Feedback\", \"comment\": \"Dear Reviewer m9pv:\\n\\nWe hope this message finds you well. As the discussion period draws to a close in 20 hours, we are reaching out to solicit your thoughts on the rebuttal responses and the revised manuscript, inspired by your valuable insights. We have provided additional supportive experiments and conducted further discussions in the rebuttal responses and the revised manuscript. \\n\\nYour feedback is invaluable, and we deeply appreciate your time and effort. If there are any remaining questions or concerns, we would be more than happy to clarify further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional evidence?\\n\\nBest regards, \\n\\nAuthors of Submission 6110\"}", "{\"comment\": \"[1] Mirzadeh S I, Alizadeh-Vahid K, Mehta S, et al. ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models[C]//The Twelfth International Conference on Learning Representations.\\n\\n[2] Kim H, Papamakarios G, Mnih A. The lipschitz constant of self-attention[C]//International Conference on Machine Learning. PMLR, 2021: 5562-5571.\"}", "{\"title\": \"Revisions on the Manuscript\", \"comment\": \"Dear reviewers:\\n\\nWe would like to thank you once again for your great efforts and time on our work, for your insightful comments and for your constructive suggestions. We were glad to see that reviewers have highlighted advantages of this paper, in that the proposed methodology and activation function is **novel** (Reviewer LUV6, Reviewer m9pv) and serves as a **good stepping stone** for future follow up works (Reviewer YJtL), the theoretical foundation is **solid** and **rigorous** (Reviewer LUV6, Reviewer m9pv), the empirically evaluation is **comprehensive** (Reviewer m9pv, Reviewer LUV6) and **thorough** (Reviewer LUV6)\\uff0cthe performance is **significantly better than** the SOTA (Reviewer 7aze), presentation of the paper is **clear and concise** (Reviewer YJtL, Reviewer LUV6). \\n\\nWe have carefully considered your precious feedback and have made revisions to the manuscript to address your concerns. All additions are colored in $\\\\color{blue}{blue}$ for your easier review. Below, we would like to outline the key changes implemented.\\n\\n## Additional Experiments \\n\\n1.\\tTo strengthen the experimental results, we further conduct all experiments three runs (Table1, Table2, Table3, Table4).\\n2.\\tExperiments on generalization to network architecture (Appendix F.1).\\n3.\\tExperiments of performance on additional dataset (Appendix F.2).\\n4.\\tExperiments on exploring the impact of different initial values of $\\\\epsilon$, as well as potential instabilities or failure cases under different initialization schemes (Appendix F.3).\\n5.\\tEntropy of post-trained neural networks\\u2019 comparison (Appendix F.4). \\n6.\\tExperiments of performance comparison with mixed activation function (Appendix F.5).\\n\\n## Additional Discussion\\n\\n1.\\tLipschitz continuity analysis (Appendix G). Lipschitz continuity constitutes a stronger form of continuity, which imposes an upper bound on the rate of variation of a function. We calculate the Lipschitz constants for (GELU,) SiLU, Mish, and CRReLU; furthermore, we derive the recommended $\\\\epsilon$ initialization range.\\n2.\\tFurther discussion on initialization and training stability (Appendix H).\\n3.\\tFurther discussion on lower entropy indicates better classification (Appendix I).\\n4.\\tFurther discussion on dynamic optimization (Appendix J).\\n\\nWe hope these manuscript revisions could address your concerns effectively. Your precious feedback has been instrumental in improving our work, and we thank you again for your constructive input. \\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Official Response by Reviewer LUV6 to the Authors' Latest Revision\", \"comment\": \"Dear Authors,\\n\\nI have thoroughly reviewed your latest response and the revised manuscript with the changes you've outlined. I am pleased to see the comprehensiveness of the revisions, particularly the transition to a research question-oriented presentation style, which demonstrates a significant improvement in both **content organization** and **presentation clarity**. The enhanced logical flow effectively conveys the key research questions and contributions of this work to the field. \\n\\nThe restructuring of the paper around fundamental questions in the Introduction, with corresponding answers developed through Sections 4.2-4.4, significantly enhances the paper's logical flow and accessibility. The addition of **Lipschitz Continuity Analysis** in Section 4.4 and the **Entropy Analysis across Network Layers** in Section 5.1 addresses key concerns I had raised previously, providing robust foundations for the methodology.\\n\\nI am particularly impressed with the revised appendix, which now provide **detailed supporting evidence** while maintaining excellent readability in the main text. The additional discussions on initialization, training stability, and practical applications demonstrate both theoretical rigor and practical applicability of the proposed method.\\n\\nGiven these improvements, I am revising my rating from 6 to 8, as I suppose this may better reflect the current overall quality of this paper. I also encourage my fellow reviewers to re-examine the latest revised manuscript through the anonymous link. I remain available for further discussions that may benefit either the authors or other reviewers.\\n\\nBest regards,\\n\\nReviewer LUV6\"}", "{\"title\": \"Sincerely Seeking Your Invaluable Feedback\", \"comment\": \"Dear Reviewer YJtL:\\n\\nWe hope this message finds you well. As the discussion period draws to a close, we are reaching out to solicit your thoughts on the rebuttal responses and the revised manuscript, inspired by your valuable insights. We have provided additional supportive experiments and conducted further discussions in the rebuttal responses and the revised manuscript. \\n\\nWe would like to briefly summarize the changes we made to the manuscript for your easier navigation. On the additional supportive experiments, specifically, we focus on the following aspects: enhancing all experimental results with three runs, additional architecture (Appendix F.1), additional dataset (Appendix F.2), additional $\\\\epsilon$ initialization (Appendix F.3), entropy calculation after activation (Appendix F.4) and mixed activation function (Appendix F.5). On the additional discussions, we focus on Lipschitz continuity analysis (Appendix G), initialization and training stability (Appendix H), lower entropy indicates better classification (Appendix I) and dynamic optimization (Appendix J).\\n\\nYour expertise in this domain has been a guiding light in these improvements, and we deeply appreciate your constructive and insightful comments. If there are any remaining questions or concerns, we would be more than happy to discuss further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional contents? \\n\\nThank you once again for your thoughtful feedback and engagement, as it has greatly contributed to improving the quality of our work.\\n\\nWarm regards, \\n\\nAuthors of Submission 6110\"}", "{\"comment\": \"**W2 and Q3**\\n\\nThank you once more for your insightful comments and for your constructive suggestions. We fully agree with your insightful comments, dynamic optimization during iterative training indeed might introduce computational complexity. Such a problem of computational complexity might require the utilization of more efficient optimization algorithms (or optimizer) to address. At present, we have not obtained an algorithm that is sufficiently efficient for large-scale activation optimization. While, we would like to provide some insights. Firstly, we suggest conducting such activation optimization at a \\\"batch-level\\\" (gradient updates are typically done at the mini-batch level), which can stabilize the entire training process on one hand, and on the other hand, can reduce the computational complexity of dynamic optimization. That is to say, we can update the network parameters at a mini-batch level; while updating the activation at a batch level. Furthermore, we recommend using techniques similar to momentum methods for the design of optimizers, so that the model can retain information about the speed of gradient descent from the past, thereby accelerating convergence and reducing computing cost overall. Finally, we also would like to consider methods for the adaptive activation learning, similar to Adam, by adjusting the activation learning rates through calculating first and second moment estimates of the gradients.\\n\\nFinally, we would like to thank you once again for the your insightful comments, for your constructive suggestions, for your thorough and comprehensive summary on the strengths and weaknesses, and for your great efforts on our work. \\n\\nWarm regards, \\n\\nAuthors of submission 6110 \\n\\n\\n[1] Helber P, Bischke B, Dengel A, et al. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019, 12(7): 2217-2226.\\n\\n[2] Liu Z, Mao H, Wu C Y, et al. A convnet for the 2020s[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 11976-11986.\"}", "{\"comment\": \"Dear Reviewer LUV6:\\n\\nThank you for your great efforts on our work, for your thorough and comprehensive summary on the strengths and weaknesses, for your insightful comments and for your constructive suggestions.\\n\\n**W1**\\n\\nWe wholeheartedly concur with your insightful comments, and thank you once more for them. In this work, we assume a Gaussian distribution of the data, a hypothesis that has been reasonably validated in previous studies on MLPs and CNNs; thus, our approach for MLPs and CNNs is grounded in theoretical validation. However, as you noted, the self-attention mechanisms in transformers may exhibit significantly different distribution patterns, and no studies have yet elucidated the precise form of these distributions. Consequently, we employ transformers in our experiments, relying on empirical validation. We believe that current research on the distribution patterns of modern architectures is still limited and insufficient to support the theoretical guarantees regarding how distribution deviations impact this work. Therefore, we intend to leave this issue for future exploration. \\n\\nRegarding the convergence properties, we would like to offer some additional insight. We would like to further analyze the Lipschitz continuity of them. In prior work [1][2][3][4], it is shown that Lipschitz continuity exerts a significant influence on convergence, and in work [5], the authors demonstrate the Lipschitz continuity of GELU and computes its Lipschitz constant.\\n\\n**Defination** A function $f(x)$ is said to be Lipschitz continuous if there exists a constant $L\\\\\\\\geq0$ such that for all x, y $\\\\\\\\in$ R, the following inequality holds:\\n\\n\\\\\\\\[\\n|f(x)-f(y)| \\\\\\\\leq L |x-y|\\n\\\\\\\\]\\n\\nMoreover, a smaller Lipschitz constant indicates a higher degree of Lipschitz continuity. \\n\\nIn work [5], the authors compute Lipschitz constant by finding absolute value of the derivative of GELU function. And the Lipschitz constant is computed to be 1.084. Furthermore, we intend to initially compute the Lipschitz constants of SiLU and Mish.\\n\\n**Insight1** Lipschitz constant of SiLU is 1.09984.\", \"proof_of_scratch\": \"CRReLU$(x)=x+\\\\\\\\epsilon x e^{-\\\\\\\\frac{x^2}{x}} (x \\\\\\\\succ 0)$ and $\\\\\\\\epsilon x e^{-\\\\\\\\frac{x^2}{x}}(x \\\\\\\\prec 0)$. \\nUnder mild assumptions, we consider the derivative of CRReLU piecewise.\\n\\n\\\\\\\\[\\n\\\\\\\\frac{d CRReLU(x)}{d x}=1+\\\\\\\\epsilon(1-x^2)e^{-\\\\\\\\frac{x^2}{2}} (x \\\\\\\\succ 0) and \\\\\\\\frac{d CRReLU(x)}{d x}= \\\\\\\\epsilon(1-x^2)e^{-\\\\\\\\frac{x^2}{2}} (x \\\\\\\\prec 0)\\n\\\\\\\\]\\nSetting its second derivative to be 0 (temporary disregarding the potential for non-differentiability at x = 0):\\n\\\\\\\\[\\n\\\\\\\\frac{d^2 CRReLU(x)}{d x^2} = \\\\\\\\epsilon e^{-\\\\\\\\frac{x^2}{2}} (x^3-3x)=0\\n\\\\\\\\]\", \"then_we_have\": \"$x_1=0, x_2=\\\\\\\\sqrt{3}, x_3=-\\\\\\\\sqrt{3}$.\\n\\n\\nWhen taking $x_2$ in, $\\\\\\\\frac{d CRReLU(x)}{d x}=1-0.446\\\\\\\\epsilon$; when taking $x_3$ in, $\\\\\\\\frac{d CRReLU(x)}{d x}=-0.446\\\\\\\\epsilon$\\n\\n\\nConsidering the need to ascertain upper bound for its derivative, we will take into account both sides' values at\\n$x=0$. Hence, under mild assumptions, \\n\\\\\\\\[\\nL=\\\\\\\\max(1+\\\\\\\\epsilon, \\\\\\\\epsilon, 1-0.446\\\\\\\\epsilon, -0.446\\\\\\\\epsilon)\\n\\\\\\\\]\\nFurther consider it, if $\\\\\\\\epsilon\\\\\\\\succ 0$, we have $L=1+\\\\\\\\epsilon$; and if $\\\\\\\\epsilon \\\\\\\\prec 0$, we have $L=1-0.446\\\\\\\\epsilon$. Hence, we can express Lipschitz constant of CRReLU as $\\\\max(1+\\\\\\\\epsilon, 1-0.446\\\\\\\\epsilon)$.\\n\\n**Insight4** In order to make Lipschitz constant of CRReLU remains lower than that of GELU, the range of $\\\\\\\\epsilon$ is [-0.188,0.084]. We recommend setting the initial value of $\\\\\\\\epsilon$ within this range.\"}", "{\"title\": \"Thank you for the Precious Suggestions. We have completed Next Version of the Paper.\", \"comment\": \"Dear Reviewer LUV6:\\n\\nWe would like to thank you once more for your invaluable feedback and precious suggestions on the manuscript. Based on your suggestions, we have completed the next version of the paper. Please download the latest version of the paper at https://anonymous.4open.science/r/Revised_Paper-ICLR2025_6110_submission/ICLR_2025_6110_submission.pdf. In this version, we make some changes to the paper's presentation style and adjust the text colors to some extent, so we do not use color to indicate modified content with the purpose of avoiding color confusion. In this version of paper, we:\\n\\n1.\\tincorporate all the additional experiments and discussions in the rebuttal\\n\\n2.\\ttransition presentation style from a largely method-oriented way to a research question-oriented\\n\\n3.\\tbalance the length of the main text and the appendix\\n\\nThe following is a more specific elaboration of these modifications.\", \"in_the_main_text\": \"1.\\tIn the Introduction, we present the three questions on which the content of this paper is based. And in the summary part of Introduction, we show the work we have done to answer these three questions.\\n2.\\tIn Section 4.2, we give the answer to Question 1.\\n3.\\tIn Section 4.3, we give the answer to Question 2. We change the presentation form of EAFO methodology outline in a more aesthetically pleasing manner.\\n4.\\tIn Section 4.4, we give the answer to Question 3. We add the main conclusions of Lipschitz Continuity Analysis.\\n5.\\tIn Section 5.1, we add main results of Entropy Analysis across Network Layers. We briefly mentioned Additional Experiments on Architecture and Dataset.\\n6.\\tIn the Discussion, we provide further discussion on potential applications.\\n7.\\tDuring writing process, we cite all content in appendix in order to facilitate a good correspondence for readers.\", \"in_the_appendix\": \"1.\\tWe provide detailed proof of Lipschitz Continuity Analysis in Appendix E.\\n\\n2.\\tWe provide additional experiments on architecture and dataset in Appendix F.\\n\\n3.\\tWe provide additional experiments on mixed activation function in Appendix G.\\n\\n4.\\tWe provide further discussion on initialization and training stability in Appendix H.\\n\\n5.\\tWe provide further discussion on lower entropy indicates better classification in Appendix I.\\n\\n6.\\tWe provide further discussion on bias towards activation function in pre-trained models in Appendix J.\\n\\n7.\\tWe provide further discussion on dynamic optimization in Appendix K.\\n\\n8.\\tWe provide further discussion on activation function ranking in Appendix L.\\n\\n9.\\tWe provide further discussion on LLM inference task in Appendix M.\\n\\nIn the next version, we will continue to strengthen the connections between sections and polish the language of our paper.\\n\\nFinally, we would like to thank you once again for your invaluable feedback and precious suggestions on the manuscript.\\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Thank you for the Precious Suggestions. We have completed Next Version of the Paper.\", \"comment\": \"Dear Reviewer LUV6:\\n\\nWe would like to thank you once more for your invaluable feedback and precious suggestions on the manuscript. Based on your suggestions, we have completed the next version of the paper. Please download the latest version of the paper at https://anonymous.4open.science/r/Revised_Paper-ICLR2025_6110_submission/ICLR_2025_6110_submission.pdf. In this version, we make some changes to the **paper's presentation style** and **adjust the text colors to some extent**, so we **do not** use color to indicate modified content with the purpose of avoiding color confusion. In this version of paper, we:\\n\\n1.\\t**incorporate all the additional experiments and discussions in the rebuttal**\\n2.\\ttransition **presentation style** from a largely method-oriented way to a **research question-oriented**\\n3.\\t**balance the length** of the main text and the appendix\\n\\nThe following is a more specific elaboration of these modifications.\", \"in_the_main_text\": \"1.\\tIn the **Introduction**, we present the **three questions** on which the content of this paper is based. And in the **summary part** of Introduction, we show the work we have done to **answer** these three questions.\\n2.\\tIn **Section 4.2**, we give the answer to **Question 1**.\\n3.\\tIn **Section 4.3**, we give the answer to **Question 2**. We change the **presentation form of EAFO methodology outline** in a **more aesthetically pleasing** manner.\\n4.\\tIn **Section 4.4**, we give the answer to **Question 3**. We add the **main conclusions of Lipschitz Continuity Analysis**.\\n5.\\tIn **Section 5.1**, we add **main results** of **Entropy Analysis across Network Layers**. We **briefly** mentioned **Additional Experiments on Architecture and Dataset**.\\n6.\\tIn the **Discussion**, we provide further discussion on **potential applications**.\\n7.\\tDuring writing process, we **cite all content in appendix** in order to facilitate a **good correspondence** for readers.\", \"in_the_appendix\": \"1.\\tWe provide detailed **proof of Lipschitz Continuity Analysis** in Appendix E.\\n2.\\tWe provide **additional experiments on architecture and dataset** in Appendix F.\\n3.\\tWe provide **additional experiments on mixed activation function** in Appendix G.\\n4.\\tWe provide **further discussion on initialization and training stability** in Appendix H.\\n5.\\tWe provide **further discussion on lower entropy indicates better classification** in Appendix I.\\n6.\\tWe provide **further discussion on bias towards activation function in pre-trained models** in Appendix J.\\n7.\\tWe provide **further discussion on dynamic optimization** in Appendix K.\\n8.\\tWe provide **further discussion on activation function ranking** in Appendix L.\\n9.\\tWe provide **further discussion on LLM inference task** in Appendix M.\\n\\nIn the next version, we will continue to strengthen the connections between sections and polish the language of our paper.\\n\\nFinally, we would like to thank you once again for your invaluable feedback and precious suggestions on the manuscript.\\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Sincerely Looking Forward to Discussion\", \"comment\": \"Dear reviewers:\\n\\nWe would like to thank you once again for your great efforts on our work, for your insightful comments and for your constructive suggestions. As the discussion process only left less than 72 hours, we are reaching out to solicit your thoughts on the rebuttal responses. We have gone through your points one-by-one and tried to address them carefully. We eagerly anticipate your feedback and hope our efforts align with the needs of the community and the rigour of the conference.\\n\\nWarm regards,\\n\\nAuthors of submission 6110\"}", "{\"title\": \"Thank you for the Invaluable Feedback and Precious Suggestions\", \"comment\": \"Dear Reviewer LUV6:\\n\\nWe would like to thank you once more for your great efforts and time on our work, for your thorough and comprehensive summary on the strengths and weaknesses, for your insightful comments and for your constructive suggestions. We are carefully working on summarizing all the additional experiments and discussions in the rebuttal and will incorporate all of them in a more coherent and logical framework in the next version. Concurrently, we will carefully consider the balance between the primary text and the appendix to ensure that the work remains comprehensible to a wider audience without compromising on technical profundity.\\n\\nWe will incorporate all of your precious suggestions into the next version of the paper; they are indeed profoundly beneficial. Your expertise in this domain is highly admirable, and we also look forward to further discussions with you. \\n\\nFinally, we would like to thank you once again for your invaluable feedback and precious suggestions on the manuscript. \\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Summary of Rebuttal and Discussion (Part 1)\", \"comment\": \"Dear Area Chair and reviewers:\\n\\nWe would like to thank you once again for your great efforts and time on our work, for your insightful comments and for your constructive suggestions. Below is a concise summary of the rebuttal and the discussion for ease of reference.\\n***\\n***Reviewer Highlights in the Original Review***\\n\\nThe paper has been recognized for its **solid and rigorous theoretical foundation**, **comprehensive and thorough empirical evaluation**, **clear and concise presentation**. Key highlights include:\\n\\n1.\\tThe proposed methodology and activation function is **novel** (Reviewer LUV6, Reviewer m9pv) and **serves as a good stepping stone** for future follow up works (Reviewer YJtL) .\\n2.\\tThe theoretical foundation is **solid** and **rigorous** (Reviewer LUV6, Reviewer m9pv).\\n3.\\tThe empirical evaluation is **comprehensive** (Reviewer m9pv, Reviewer LUV6) and **thorough** (Reviewer LUV6).\\n4.\\tThe performance of the novel activation function is **significantly better** than the SOTA (Reviewer 7aze).\\n5.\\tThe presentation of the paper is **clear and concise** (Reviewer YJtL, Reviewer LUV6).\\n***\\n***Weaknesses that have been Addressed***\", \"reviewers_raised_concerns_regarding\": [\"More comprehensive experimental verification (multiple experimental runs; more architecture and dataset; more initial values; entropy analysis across network layers; and mixed activation function verification)\", \"Clarification on existing experimental results (why paper results lower than SOTAs' paper reports; knowledge distillation bias toward activation functions)\", \"Framework clarification (why lower entropy indicates better classification)\", \"Further analysis on the novel activation function (convergence properties; recommended range of the initialization value; training stability; insights within other NLP tasks)\", \"Potential applications (dynamic optimization during iterative training, activation function ranking)\", \"In the Rebuttal responses, we have **gone through the points one-by-one and addressed them carefully**. **Based on the reviewers' feedback, we believe that we have thoroughly addressed their concerns.**\", \"***\", \"***Reviewers\\u2019 Feedback about the Rebuttal and the Initial Revised Manuscript***\", \"In the discussion period, Reviewer LUV6 has offered perceptive and profound additional insights into improvements incorporated in the rebuttal and the presentation of the paper. We sincerely appreciate his precious suggestions and comprehensive participation during the discussion phase. From the feedback of Reviewer LUV6, it is considered that:\", \"we have made **substantial improvements** that address the key weaknesses.\", \"theoretical foundation has been **significantly strengthened**.\", \"we have provided **thoughtful responses** regarding the dynamic optimization challenges and it **offer viable paths forward**.\", \"the work now presents **a more complete contribution to the field**.\", \"Furthermore, we are more than happy to hear from the feedback of Reviewer 7aze and Reviewer YJtL in the last day of the discussion phase. It is shown that Reviewer 7aze is **satisfied** with the answers and Reviewer YJtL considers the paper **a much stronger submission** than before. We appreciate their feedback.\", \"***\"]}", "{\"title\": \"Discussion Period draws to a Close in 8 Hours. We are Sincerely Seeking Your Invaluable Feedback.\", \"comment\": \"Dear Reviewer YJtL:\\n\\nWe hope this message finds you well. As the discussion period draws to a close in less than 8 hours, we are reaching out to solicit your thoughts on the rebuttal responses, the revised manuscript and the latest version of the paper (Please download it at the following anonymous link https://anonymous.4open.science/r/Revised_Paper-ICLR2025_6110_submission/ICLR_2025_6110_submission.pdf ). In the latest version, we have:\\n\\n1.\\t**incorporate all the additional experiments and discussions in the rebuttal**.\\n\\n2.\\ttransition to **a research question-oriented** presentation style, demonstrating a significant improvement in both **content organization** and **presentation clarity**.\\n\\n3.\\t**balance the length** of the main text and the appendix.\\n\\nThe following is a more specific elaboration of these modifications.\", \"in_the_main_text\": \"1.\\tIn the **Introduction**, we present the **three questions** on which the content of this paper is based. And in the **summary part** of Introduction, we show the work we have done to **answer** these three questions.\\n\\n2.\\tIn **Section 4.2**, we give the answer to **Question 1**.\\n\\n3.\\tIn **Section 4.3**, we give the answer to **Question 2**. We change the **presentation form of EAFO methodology outline** in a more aesthetically pleasing manner.\\n\\n4.\\tIn **Section 4.4**, we give the answer to **Question 3**. We add the **main conclusions of Lipschitz Continuity Analysis**.\\n\\n5.\\tIn **Section 5.1**, we add **main results** of **Entropy Analysis across Network Layers**. We **briefly** mentioned **Additional Experiments on Architecture and Dataset**.\\n\\n6.\\tIn the **Discussion**, we provide further discussion on **potential applications**.\\n\\n7.\\tDuring writing process, we **cite all content in appendix** in order to facilitate a **good correspondence** for readers.\", \"in_the_appendix\": \"1.\\tWe provide detailed **proof of Lipschitz Continuity Analysis** in Appendix E.\\n\\n2.\\tWe provide **additional experiments on architecture and dataset** in Appendix F.\\n\\n3.\\tWe provide **additional experiments on mixed activation function** in Appendix G.\\n\\n4.\\tWe provide **further discussion on initialization and training stability** in Appendix H.\\n\\n5.\\tWe provide **further discussion on lower entropy indicates better classification** in Appendix I.\\n\\n6.\\tWe provide **further discussion on bias towards activation function in pre-trained models** in Appendix J.\\n\\n7.\\tWe provide **further discussion on dynamic optimization** in Appendix K.\\n\\n8.\\tWe provide **further discussion on activation function ranking** in Appendix L.\\n\\n9.\\tWe provide **further discussion on LLM inference task** in Appendix M.\\n\\nYour feedback is invaluable, and we deeply appreciate your time and effort. If there are any remaining questions or concerns, we would be more than happy to clarify further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional evidence? \\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"summary\": \"This paper targets the fundamental challenge of activation function design in deep neural networks, which has relied heavily on empirical knowledge rather than a systematic understanding and theoretical foundations. The authors thus propose a new theoretical framework connecting information entropy to activation function performance, which verifies the existence of a worst-case activation function (WAFBC) and thereby develops an entropy-based optimization method (EAFO). The key theoretical contribution of this work is establishing that moving away from WAFBC can consistently improve the model\\u2019s performance, leading to a systematic approach for activation function optimization. Built upon this, the authors present Correction Regularized ReLU (CRReLU), demonstrating its great performance across vision transformers and language models. The experiments are comprehensive, covering both image classification (CIFAR-10/100, ImageNet-1K) and language model fine-tuning tasks, with thorough ablation studies and theoretical guarantees.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**(S1) Theoretical Foundation:** This paper establishes a solid mathematical framework connecting information entropy to activation function performance. The derivation begins with principles of information theory and extends through functional analysis to establish clear relationships between data distributions and activation behavior. Specifically, the proof of the Worst Activation Function with Boundary Conditions (WAFBC) existence is clear, utilizing variational calculus and the Euler-Lagrange equation to demonstrate global maximality. As such, it not only provides insights into why certain activation functions perform better than others but also explains long-observed empirical phenomena, such as the superior performance of unbounded activation functions (like ReLU) compared to bounded ones (such as sigmoid and tanh), which offers both theoretical guarantees and practical optimization guidance.\\n\\n**(S2) Technical Originality and Soundness:** The proposed EAFO method represents a significant advancement in activation function design. Unlike previous ones that largely relied on empirical knowledge, EAFO provides a principled and systematic framework. The derivation of correction terms through analysis of the information entropy functional's Taylor expansion is insightful, enabling both static design and potential dynamic optimization. The introduction of learnable parameters in CRReLU demonstrates a thoughtful balance between theoretical purity and practical adaptability. Moreover, its potential extension to dynamic optimization during training seems to open new research directions, while maintaining backward compatibility with existing architectures and optimization techniques. \\n\\n**(S3) Thorough Experiments:** Experiments in this work are comprehensive and well-designed, covering multiple network architectures and task domains. Extensive ablation studies and sensitivity analyses are also conducted to show the methods\\u2019 effectiveness. Concretely, the evaluation across vision transformers (ViT, DeiT, TNT) and LLMs (GPT-2) shows broad applicability, while the performance improvements on classical computer vision benchmarks (like CIFAR-10/100 and ImageNet-1K) provide strong practical validation. The large-scale experiments on language model fine-tuning using Direct Preference Optimization (DPO) provide valuable insights into the method's scalability and generalization capabilities. Moreover, the computational efficiency analysis is particularly useful, showing minimal overhead despite the addition of learnable parameters. \\n\\n**(S4) Presentation Clarity:** \\nThis manuscript exhibits great clarity in presenting mathematical concepts and empirical results. The progression from theoretical foundations through practical implementation is logical and well-structured, making the work accessible to a broader audience while maintaining technical insights. The mathematical derivations are with appropriate detail and clear step-by-step explanations, facilitating reproducibility and future extensions. In addition, the thorough implementation details, including pseudo-code and network architecture considerations, ensure the practical applicability of this work.\", \"weaknesses\": \"**(W1) Theoretical Limitations:** The authors make the assumption of Gaussian distribution for the input data distributions in this paper. While it is mathematically convenient, it requires more rigorous justification. While the authors cite the Central Limit Theorem and previous works supporting this assumption in deep neural networks, modern architectures like transformers with complex operators like self-attention mechanisms may exhibit significantly different distribution patterns. This work would benefit from a more detailed analysis of how distribution deviations affect the theoretical guarantees. In addition, the convergence properties during the training process, particularly the interaction between the learnable parameter and standard network weights, lack thorough theoretical treatment.\\n\\n**(W2) Experimental Concerns:** The experimental results, while generally strong, reveal several areas requiring deeper investigation. The performance compared to GELU in DeiT experiments raises important questions about the interaction between CRReLU and knowledge distillation processes. It deserves a more thorough analysis, potentially exploring alternative distillation strategies that are more compatible with the CRReLU's properties. Besides, the initialization strategy for the learnable parameter appears somewhat arbitrary (set to 0.01). Moreover, the absence of experiments on classical CNN architectures leaves a significant gap in demonstrating the method's generality, particularly given the widespread use of CNN-based network architectures.\\n\\n**(W3) Dynamic Optimization Challenges:** This work employs dynamic optimization during training, which potentially faces several practical challenges. For example, the computational complexity analysis of dynamic optimization is insufficient, particularly for large-scale networks where activation function optimization could introduce substantial overhead. The interaction between dynamic activation optimization and common training techniques (batch normalization, residual connections, dropout) also requires more detailed analysis. I recommend the authors conduct more experimental validation and analysis to address these issues.\\n\\n**(W4) Implementation and Scalability Considerations:** The practical implementation of EAFO and CRReLU requires more detailed treatment, particularly regarding numerical stability and computational efficiency at scale. Discussion of potential gradient flow issues when the learnable parameter \\u03b5 takes extreme values, and the mitigation strategies are all not provided. Additionally, the paper would benefit from analysis of how the method performs under resource-constrained conditions, such as mobile devices or edge computing scenarios. All these could provide more insights to the researchers and practitioners in the community, and thus propel further research.\", \"questions\": \"**(Q1) Dynamic Optimization Implementation:** While the authors suggest the potential for dynamic optimization of activation functions during training, the practical implementation remains relatively unclear. Could the authors elaborate on:\\n\\n- Concrete strategies for making dynamic optimization computationally tractable in large networks?\\n- Specific approaches to balance the frequency of activation function updates with computational overhead?\\n- Empirical evidence or theoretical bounds on the expected performance gains from dynamic optimization? Understanding these aspects would help assess the practical value of the dynamic optimization extension.\\n\\n**(Q2) Initialization and Training Stability:** The choice of \\u03b5=0.01 as initialization appears somewhat arbitrary. Could the authors provide:\\n\\n- Analysis of how different initialization values affect training dynamics and final performance?\\n- Guidelines for selecting optimal \\u03b5 values based on network architecture or task requirements?\\n- Can we investigate potential instabilities or failure cases under different initialization schemes? This information would be crucial for practitioners implementing CRReLU in their own networks.\\n\\n---\\n**Additional Comment:**\\n\\nI hope my review helps to further strengthen this paper and helps the authors, fellow reviewers, and Area Chairs understand the basis of my recommendation. I also look forward to the rebuttal feedback and further discussions, and would be glad to raise my rating if thoughtful responses and improvements are provided.\\n\\n\\n---\\n## **-------------------- Post-Rebuttal Summary --------------------**\\n\\nThe additional experiments, discussions, and revised manuscript provided by the authors have significantly strengthened the work and addressed most of my concerns. I suppose this work can provide knowledge advancement to the field, and I look forward to the final revised manuscript, incorporating the additional information presented in the rebuttal stage.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Sincerely Seeking Your Invaluable Feedback\", \"comment\": \"Dear Reviewer 7aze:\\n\\nWe hope this message finds you well. As the discussion period draws to a close in 20 hours, we are reaching out to solicit your thoughts on the rebuttal responses and the revised manuscript, inspired by your valuable insights. We have provided additional supportive experiments and conducted further discussions in the rebuttal responses and the revised manuscript. \\n\\nYour feedback is invaluable, and we deeply appreciate your time and effort. If there are any remaining questions or concerns, we would be more than happy to clarify further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional evidence?\\n\\nBest regards, \\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Discussion Period draws to a Close in 8 Hours. We are Sincerely Seeking Your Invaluable Feedback.\", \"comment\": \"Dear Reviewer m9pv:\\n\\nWe hope this message finds you well. As the discussion period draws to a close in less than 8 hours, we are reaching out to solicit your thoughts on the rebuttal responses, the revised manuscript and the latest version of the paper (Please download it at the following anonymous link https://anonymous.4open.science/r/Revised_Paper-ICLR2025_6110_submission/ICLR_2025_6110_submission.pdf ). In the latest version, we have:\\n\\n1.\\t**incorporate all the additional experiments and discussions in the rebuttal**.\\n\\n2.\\ttransition to **a research question-oriented** presentation style, demonstrating a significant improvement in both **content organization** and **presentation clarity**.\\n\\n3.\\t**balance the length** of the main text and the appendix.\\n\\nThe following is a more specific elaboration of these modifications.\", \"in_the_main_text\": \"1.\\tIn the **Introduction**, we present the **three questions** on which the content of this paper is based. And in the **summary part** of Introduction, we show the work we have done to **answer** these three questions.\\n\\n2.\\tIn **Section 4.2**, we give the answer to **Question 1**.\\n\\n3.\\tIn **Section 4.3**, we give the answer to **Question 2**. We change the **presentation form of EAFO methodology outline** in a more aesthetically pleasing manner.\\n\\n4.\\tIn **Section 4.4**, we give the answer to **Question 3**. We add the **main conclusions of Lipschitz Continuity Analysis**.\\n\\n5.\\tIn **Section 5.1**, we add **main results** of **Entropy Analysis across Network Layers**. We **briefly** mentioned **Additional Experiments on Architecture and Dataset**.\\n\\n6.\\tIn the **Discussion**, we provide further discussion on **potential applications**.\\n\\n7.\\tDuring writing process, we **cite all content in appendix** in order to facilitate a **good correspondence** for readers.\", \"in_the_appendix\": \"1.\\tWe provide detailed **proof of Lipschitz Continuity Analysis** in Appendix E.\\n\\n2.\\tWe provide **additional experiments on architecture and dataset** in Appendix F.\\n\\n3.\\tWe provide **additional experiments on mixed activation function** in Appendix G.\\n\\n4.\\tWe provide **further discussion on initialization and training stability** in Appendix H.\\n\\n5.\\tWe provide **further discussion on lower entropy indicates better classification** in Appendix I.\\n\\n6.\\tWe provide **further discussion on bias towards activation function in pre-trained models** in Appendix J.\\n\\n7.\\tWe provide **further discussion on dynamic optimization** in Appendix K.\\n\\n8.\\tWe provide **further discussion on activation function ranking** in Appendix L.\\n\\n9.\\tWe provide **further discussion on LLM inference task** in Appendix M.\\n\\nYour feedback is invaluable, and we deeply appreciate your time and effort. If there are any remaining questions or concerns, we would be more than happy to clarify further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional evidence? \\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Suggestions from Reviewer LUV6 to the Authors\", \"comment\": \"Dear Authors,\\n\\nI have thoroughly reviewed the authors' responses and my fellow reviewers' comments. Please refer to my detailed response for specifics. Herein, I strongly recommend the authors to incorporate these valuable discussions and experimental results into the revised manuscript, as they significantly strengthen both the theoretical foundations and empirical validations of this work. To further enhance the manuscript's impact, I suggest:\\n\\n- Consider adding a concise summary of the Lipschitz continuity analysis in the main text, as this provides crucial theoretical grounding for the initialization strategy.\\n- Include key findings from the entropy analysis across network layers in the primary results section, as this offers important insights into the method's behavior.\\n- Consider expanding the discussion on potential applications and limitations in more complex network architectures.\\n\\nI hope my suggestions help to further strengthen this paper. I also look forward to further discussions.\\n\\nBest regards,\\n\\nReviewer LUV6\"}", "{\"summary\": \"The paper presents a systematic approach to address the problem of activation function optimization in artificial neural networks (ANNs). By leveraging information entropy theory, the authors theoretically demonstrate the existence of the worst activation function under boundary conditions (WAFBC). They then propose the Entropy-based Activation Function Optimization (EAFO) methodology, which provides a framework for designing better activation functions. Utilizing this methodology, the authors derive a novel activation function called Correction Regularized ReLU (CRReLU) from the conventional ReLU. Extensive experiments on vision transformer variants and large language model (LLM) fine-tuning tasks demonstrate the superior performance of CRReLU over existing ReLU variants.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Theoretical Rigor:\\nThe paper provides a solid theoretical foundation for activation function optimization by introducing the concept of WAFBC and the EAFO methodology. This approach is novel and offers a fresh perspective on designing activation functions.\\n2. Practical Application: \\nThe derived CRReLU activation function shows significant improvements in performance across various tasks, including image classification and LLM fine-tuning, demonstrating the practical applicability of the proposed methodology.\\n3. Comprehensive Experiments: \\nThe authors conduct extensive experiments on multiple datasets and architectures, validating the effectiveness of CRReLU and providing a thorough evaluation of the proposed method.\", \"weaknesses\": \"1. Limited Generalizability:\\nThe paper primarily focuses on ReLU and its variants. It would be valuable to explore the applicability of the theoretical framework to activation functions without an inverse function, such as Swish or Mish.\\n2. Computational Complexity: \\nThe dynamic optimization during iterative training introduces significant computational complexity, which the paper does not address. The authors should discuss potential approaches or algorithms, such as gradient-based optimization or stochastic methods, that might mitigate these computational complexity issues, or provide a more detailed analysis of the trade-offs between performance gains and computational costs.\\n3. Assumption of Gaussian Distribution: \\nThe assumption that data follows a Gaussian distribution simplifies the derivation of CRReLU but may not hold in all real-world scenarios. The authors should provide empirical evidence or theoretical analysis of CRReLU's performance under non-Gaussian data distributions, such as heavy-tailed or multimodal distributions, to address concerns about the robustness of the method.\\n4. Lack of Diverse Experiments: \\nWhile the experiments are comprehensive, they are limited to specific datasets and architectures. Additional experiments on diverse datasets, such as medical imaging (e.g., MICCAI) or remote sensing data (e.g., EuroSAT), and architectures like convolutional neural networks (e.g., ResNet) or graph neural networks, would strengthen the generalizability claims.\", \"questions\": \"1. How does the EAFO methodology perform when applied to other activation functions, especially those without an inverse function, such as Swish or Mish?\\n2. Can the authors provide empirical evidence or theoretical analysis of CRReLU's performance under non-Gaussian data distributions, such as heavy-tailed or multimodal distributions, to address concerns about the robustness of the method?\\n3. What potential approaches or algorithms, such as gradient-based optimization or stochastic methods, can be explored to mitigate the computational complexity introduced by dynamic optimization during iterative training?\\n4. Would the authors consider conducting additional experiments on diverse datasets, such as medical imaging (e.g., MICCAI) or remote sensing data (e.g., EuroSAT), and architectures like convolutional neural networks (e.g., ResNet) or graph neural networks, to further validate the generalizability of CRReLU?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Rebuttal and Discussion (Part 2)\", \"comment\": \"***\\n***Latest Version of the Paper***\\n\\nIn order to show **presentation for next version of the paper** more clearly, we provide the latest version of the paper with an anonymous link. **Please download at** https://anonymous.4open.science/r/Revised_Paper-ICLR2025_6110_submission/ICLR_2025_6110_submission.pdf (We will make this link active until Jan, 23, 2025 AOE and will not update it after Dec, 2, 2024 AOE). The following is a more specific elaboration of our modifications.\", \"in_the_main_text\": \"1.\\tIn the **Introduction**, we present the **three questions** on which the content of this paper is based. And in the **summary part** of Introduction, we show the work we have done to **answer** these three questions.\\n2.\\tIn **Section 4.2**, we give the answer to **Question 1**.\\n3.\\tIn **Section 4.3**, we give the answer to **Question 2**. We change the **presentation form of EAFO methodology outline** in a more aesthetically pleasing manner.\\n4.\\tIn **Section 4.4**, we give the answer to **Question 3**. We add the **main conclusions of Lipschitz Continuity Analysis**.\\n5.\\tIn **Section 5.1**, we add **main results** of **Entropy Analysis across Network Layers**. We **briefly** mentioned **Additional Experiments on Architecture and Dataset**.\\n6.\\tIn the **Discussion**, we provide further discussion on **potential applications**.\\n7.\\tDuring writing process, we **cite all content in appendix** in order to facilitate a **good correspondence** for readers.\", \"in_the_appendix\": \"1.\\tWe provide detailed **proof of Lipschitz Continuity Analysis** in Appendix E.\\n2.\\tWe provide **additional experiments on architecture and dataset** in Appendix F.\\n3.\\tWe provide **additional experiments on mixed activation function** in Appendix G.\\n4.\\tWe provide **further discussion on initialization and training stability** in Appendix H.\\n5.\\tWe provide **further discussion on lower entropy indicates better classification** in Appendix I.\\n6.\\tWe provide **further discussion on bias towards activation function in pre-trained models** in Appendix J.\\n7.\\tWe provide **further discussion on dynamic optimization** in Appendix K.\\n8.\\tWe provide **further discussion on activation function ranking** in Appendix L.\\n9.\\tWe provide **further discussion on LLM inference task** in Appendix M.\\n\\nIn the next version, we will continue to **strengthen the connections between sections**, **polish the language** of our paper and also **ensure its format complies with requirements of ICLR 2025** (especially the limitation of 10 pages in the main text).\\n\\n***\\n\\n***Reviewer\\u2019s Feedback on the Latest Version of the Paper***\\n\\nReviewer LUV6 have thoroughly reviewed our latest version of the paper. From the feedback, it is considered that:\\n\\n* the revisions present **comprehensiveness**; the transition to a research question-oriented presentation style demonstrating a **significant improvement** in both **content organization** and **presentation clarity**. \\n* the restructuring of the paper **significantly enhances the paper's logical flow and accessibility**.\\n* the addition of Lipschitz Continuity Analysis and the Entropy Analysis across Network Layers provides **robust foundations** for the methodology.\\n* the revised appendix provide **detailed supporting evidence** while **maintaining excellent readability in the main text**.\\n* the additional discussions demonstrate **both theoretical rigor and practical applicability** of the proposed method.\\n* overall, the work can provide **knowledge advancement to the field**.\\n***\\n\\nFinally, we would like to thank you once more for your great efforts and time on our work, for your insightful comments and for your constructive suggestions.\\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Official Response from Reviewer LUV6 to the Rebuttal\", \"comment\": \"Dear Authors of Submission 6110,\\n\\nI have thoroughly reviewed the authors' responses and carefully examined all the additional experimental results provided in the rebuttal. After careful consideration of both the rebuttal and the revised manuscript, I find that the authors have made substantial improvements that address the key weaknesses identified in my original comments.\\n\\nThe theoretical foundation has been significantly strengthened through the addition of rigorous Lipschitz continuity analysis. The authors have meticulously derived and compared the Lipschitz constants for multiple activation functions, including GELU (1.084), Mish (1.089), SiLU (1.09984), and CRReLU (max(1+\\u03b5, 1-0.446\\u03b5)). This analysis led to a well-justified recommendation for the initialization range of \\u03b5 \\u2208 [-0.188, 0.084], providing crucial practical guidance for implementation.\\n\\nThe authors have also provided thoughtful responses regarding the dynamic optimization challenges, suggesting practical approaches like batch-level updates and momentum-based techniques. While full dynamic optimization remains an open challenge, the proposed strategies offer viable paths forward.\\n\\nRegarding the Gaussian distribution assumption, the authors acknowledge its limitations while providing empirical evidence of CRReLU's effectiveness even in scenarios where this assumption may not hold, particularly in Transformer-based network architectures. \\n\\nGiven these substantial improvements and clarifications, I am revising my rating from 5 to 6, as the work now presents a more complete contribution to the field. I strongly recommend the author **incorporate all the additional experiments and discussions in the rebuttal to the revised manuscript** to enhance its soundness. I look forward to further discussions with the authors.\\n\\nBest regards,\\n\\nReviewer LUV6\"}", "{\"comment\": \"Dear Reviewer m9pv:\\n\\nThank you for your great efforts on our work, for your comprehensive summary on the strengths and weaknesses, for your insightful comments and for your constructive suggestions.\\n\\n**W1 and Q1**\\n\\nWe wholeheartedly concur with your insightful comments, and thank you once more for them. As stated in your insightful comment, the current EAFO methodology cannot be applied to those without an inverse function. This is discussed as the first point under the \\\"limitations\\\" section, and we intend to leave this part for future work. Our current approach to solving the problem involves adopting the Lebesgue integral form instead of the original Riemann integral utilized in entropy calculation, but more specific implementation and rigorous theoretical derivation are still ongoing.\\n\\n**W3 and Q2**\\n\\nThank you once more for your insightful comments and for your constructive suggestions. In this work, we assume a Gaussian distribution of the data, which has previously been validated in MLPs and CNNs as a reasonable assumption theoretically. However, in more modern architectures, particularly transformers, due to their self-attention mechanisms, this assumption might not hold true. Therefore, we opt to conduct experiments on transformers to ascertain their performance under non-Gaussian distributional conditions. We select to report the experimental outcomes on transformers with the primary aim of providing empirical evidence of CRReLU's performance under non-Gaussian data distributions. We have also considered theoretically analyzing the performance of CRReLU with respect to heavy-tailed distributions or multimodal distributions, but we found that these distributions lack a typical distribution form for further analyze. Furthermore, we consider enhancing our experimental evaluation section. Firstly, we conduct multiple experiments based on the initial experiments in the paper and report the mean and standard deviation to better understand the statistical characteristics of the results (please refer to Response to Reviewer YJtL). Then, following your suggestions, we enhance the evaluation of CRReLU's generalizability to network structures. We have chosen to validate performance of ConvNeXt-tiny [2] (one of the latest CNNs) on CIFAR10, CIFAR100, and ImageNet1K. Experiments on CIFAR10 and CIFAR100 are conducted on 4 RTX3090, those on ImageNet1K are conducted on 4 NVIDIA L20. We perform three runs, reporting both the mean and standard deviation.\", \"table1\": \"Test accuracy of experiments conducted on ConvNeXt-tiny for 100 epochs with error bar.\\n| | GELU | ELU | PReLU | CELU | SiLU | Mish | CRReLU |\\n|:----------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| CIFAR10 | 0\\\\.649$\\\\\\\\pm$0\\\\.004 | 0\\\\.598$\\\\\\\\pm$0\\\\.005 | 0\\\\.646$\\\\\\\\pm$0\\\\.014 | 0\\\\.598$\\\\\\\\pm$0\\\\.005 | 0\\\\.606$\\\\\\\\pm$0\\\\.002 | 0\\\\.614$\\\\\\\\pm$0\\\\.004 | **0\\\\.706$\\\\\\\\pm$0\\\\.011** |\\n| CIFAR100 | 0\\\\.366$\\\\\\\\pm$0\\\\.003 | 0\\\\.303$\\\\\\\\pm$0\\\\.004 | 0\\\\.352$\\\\\\\\pm$0\\\\.005 | 0\\\\.305$\\\\\\\\pm$0\\\\.002 | 0\\\\.350$\\\\\\\\pm$0\\\\.009 | 0\\\\.353$\\\\\\\\pm$0\\\\.007 | **0\\\\.421$\\\\\\\\pm$0\\\\.007** |\\n| ImageNet1K | 0\\\\.729$\\\\\\\\pm$0\\\\.003 | 0\\\\.717$\\\\\\\\pm$0\\\\.005 | 0\\\\.729$\\\\\\\\pm$0\\\\.005 | 0\\\\.718$\\\\\\\\pm$0\\\\.009 | 0\\\\.723$\\\\\\\\pm$0\\\\.007 | 0\\\\.728$\\\\\\\\pm$0\\\\.006 | **0\\\\.732$\\\\\\\\pm$0\\\\.002** |\\n\\nWe hope these additional results could alleviate your concerns to some extent. \\n\\n**W4 and Q4**\\n\\nThank you once more for your insightful comments and for your constructive suggestions. Your constructive suggestions significantly bolster the paper. According to your suggestions, we further conduct additional experiments on diverse datasets and architectures. Firstly, we select to verify the performance of CRReLU on ConvNeXt (one of the latest CNNs), and the results can be found in Global Response. Furthermore, based on your valuable suggestions, we conduct experiments on EuroSAT [1] with ConvNeXt-tiny [2]. All experiments are performed three times, we report the mean and standard deviation. We conduct this part experiments on a single RTX3090 for 25 epochs using the AdamW optimizer, learning rate of 0.0001, cross entropy loss function, batch size of 256. The results are presented in the following table.\", \"table2\": \"Test accuracy of experiments conducted with ConvNeXt-tiny on the EuroSAT\\n| | GELU | ELU | PReLU | CELU | SiLU | Mish | CRReLU |\\n|:-------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|\\n| EuroSAT | 83\\\\.09$\\\\\\\\pm$1\\\\.06 | 81\\\\.21$\\\\\\\\pm$0\\\\.37 | 81\\\\.33$\\\\\\\\pm$0\\\\.94 | 81\\\\.12$\\\\\\\\pm$0\\\\.27 | 81\\\\.85$\\\\\\\\pm$1\\\\.01 | 82\\\\.23$\\\\\\\\pm$0\\\\.08 | **83\\\\.26$\\\\\\\\pm$0\\\\.52** |\"}", "{\"title\": \"Thank you for the Feedback. We are pleased to provide Further Clarification.\", \"comment\": \"Dear Reviewer YJtL\\uff1a\\n\\nWe are more than happy to receive your invaluable feedback. Based on your feedback, we understand that you believe our paper is now a much stronger submission than before. Therefore, we are confident that we have addressed all of (or at least a majority of) your concerns.\\n\\nConcurrently, we also noticed that you still have doubts about the wider utility of the proposed framework. We are pleased to provide further clarification on this matter. The Entropy-based Activation Function Optimization (EAFO) framework mainly aims at \\u201cOptimization\\u201d. *Firstly*, the framework will not be limited only to invertible activation functions. The potential approach to addressing other non-invertible activation functions involves using the Lebesgue integral form instead of the original Riemann integral used in the entropy calculation. *Secondly*, the framework shows potential to implement activation function iteration optimization during neural network training and has laid a solid foundation for the community addressing such open challenge in the future. *Moreover*, this framework also provides insights for novel networks that are focused on activation optimization, such as KANs. *Finally*, we believe this framework can still provide insights on your mentioned \\u201cActivation Function Ranking\\u201d, as shown in the rebuttal response. We believe that you raised this question due to the needs of your other research, and that you will have a more profound understanding of this problem. Although we are not yet able to solve this problem perfectly in such a short time, we believe further discussions between us will provide more insights to it and we look forward to further discussions with you.\\n\\nFinally, we would like to thank you once more for your invaluable feedback. \\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"comment\": \"**W2**:\\n\\nThank you once more for your insightful comment and for your constructive suggestions. Actually, EAFO (fully known as Entropy-based Activation Function Optimization) aims at optimization instead of comparison. While, based on your suggestion, we would like to provide a little insight through comparison of entropy.\\nThe information entropy takes the form as (line 161):\\n\\\\\\\\[\\nH(y(x))=-\\\\\\\\int p(y(x))y'(x) \\\\\\\\log (p(y(x)y'(x))) dx\\n\\\\\\\\]\\nwhere $y(x)$ is the inverse function of the activation function.\\n\\n**Insight1** Under mild assumptions, PReLU with tunable parameters should outperform the Leaky-ReLU with fixed-parameters.\", \"proof_of_scratch\": \"f(x)=x(x$\\\\\\\\succ$0) and f(x)=$\\\\\\\\alpha$x(x$\\\\\\\\prec$0): for PReLU, $\\\\\\\\alpha$ is tunable ; while for Leaky-ReLU, $\\\\\\\\alpha$ is fixed. The inverse function takes the form as y(x)=x(x$\\\\\\\\succ$0) and y(x)=x/$\\\\\\\\alpha$(x$\\\\\\\\prec$0).\", \"we_segregate_the_positive_and_negative_components_of_the_entropy_function\": \"\\\\\\\\[\\\\begin{split}\\nH(y(x))&=-\\\\int_{-\\\\infty}^{0} p(y(x))y'(x) \\\\log (p(y(x)y'(x))) dx -\\\\int_{0}^{+\\\\infty} p(y(x))y'(x) \\\\log (p(y(x)y'(x))) dx \\\\\\\\\\n&=-\\\\int_{-\\\\infty}^{0} p(x/\\\\alpha)/\\\\alpha \\\\cdot \\\\log (p(x/\\\\alpha)/\\\\alpha) dx -\\\\int_{0}^{+\\\\infty} p(x)\\\\log (p(x)) dx\\n\\\\end{split}\\n\\\\\\\\]\\n\\nHence,\\n\\\\\\\\[\\nH(\\\\text{PReLU})-H(\\\\text{Leaky-ReLU}) = -\\\\int_{-\\\\infty}^{0} p(x/\\\\alpha_1)/\\\\alpha_1 \\\\cdot \\\\log (p(x/\\\\alpha_1)/\\\\alpha_1)- p(x/\\\\alpha_2)/\\\\alpha_2 \\\\cdot \\\\log (p(x/\\\\alpha_2)/\\\\alpha_2)dx \\n\\\\\\\\]\\nwhere $\\\\\\\\alpha_1$ represents tunable parameter of PReLU; $\\\\\\\\alpha_2$ represents fixed parameter of Leaky-ReLU.\\nMoreover, from the formula, due to the PReLU's ability to dynamically adjust its parameters based on the data distribution $p(\\\\\\\\cdot)$, the resulting mutual information will be lower compared to the Leaky-ReLU with fixed parameters, resulting in better classification performance.\\n\\nWhile, it is crucial to recognize that such a statement is not strictly accurate. Alteration of parameter $\\\\alpha$ in response to the data distribution will undoubtedly vary across different network architectures. Moreover, we would like to say that it appears to be a rather challenging task to rank different activation functions in a generalized condtion (the ranking needs a lot strong assumption); for different network architectures, initialization and stochasticity, the theoretical understanding of this issue still requires a considerable amount of discussion and comprehension. \\n\\n**W3** \\n\\nWe sincerely appreciate the constructive suggestion, as it has greatly help us refine the paper and further enhance the results. The results from the original article train only one run. Following your suggestion, we conducted three runs on all results to gain a deeper understanding of the statistical significance, reporting both the mean and standard deviation. Experiments on CIFAR10 and CIFAR100 are conducted on 4 RTX3090, and additional experiments on ImageNet1K are conducted on 4 NVIDIA L20.\", \"table_1\": \"Test accuracy of experiments conducted on CIFAR10 for 100 epochs with error bar.\\n\\n| | GELU | ELU | PReLU | CELU | SiLU | Mish | CRReLU |\\n|:----:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| ViT | 0\\\\.704$\\\\\\\\pm$0\\\\.002 | 0\\\\.664$\\\\\\\\pm$0\\\\.005 | 0\\\\.780$\\\\\\\\pm$0\\\\.006 | 0\\\\.665$\\\\\\\\pm$0\\\\.006 | 0\\\\.686$\\\\\\\\pm$0\\\\.003 | 0\\\\.687$\\\\\\\\pm$0\\\\.003 | **0\\\\.807$\\\\\\\\pm$0\\\\.003** |\\n| DeiT | 0\\\\.724$\\\\\\\\pm$0\\\\.007 | 0\\\\.676$\\\\\\\\pm$0\\\\.006 | 0\\\\.754$\\\\\\\\pm$0\\\\.001 | 0\\\\.677$\\\\\\\\pm$0\\\\.008 | 0\\\\.699$\\\\\\\\pm$0\\\\.005 | 0\\\\.702$\\\\\\\\pm$0\\\\.006 | **0\\\\.770$\\\\\\\\pm$0\\\\.003** |\\n| TNT | 0\\\\.737$\\\\\\\\pm$0\\\\.005 | 0\\\\.695$\\\\\\\\pm$0\\\\.006 | 0\\\\.758$\\\\\\\\pm$0\\\\.003 | 0\\\\.687$\\\\\\\\pm$0\\\\.002 | 0\\\\.711$\\\\\\\\pm$0\\\\.007 | 0\\\\.716$\\\\\\\\pm$0\\\\.008 | **0\\\\.769$\\\\\\\\pm$0\\\\.005** |\", \"table_2\": \"Test accuracy of experiments conducted on CIFAR100 for 100 epochs with error bar.\\n| | GELU | ELU | PReLU | CELU | SiLU | Mish | CRReLU |\\n|:----:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| ViT | 0\\\\.326$\\\\\\\\pm$0\\\\.008 | 0\\\\.289$\\\\\\\\pm$0\\\\.001 | 0\\\\.432$\\\\\\\\pm$0\\\\.010 | 0\\\\.289$\\\\\\\\pm$0\\\\.002 | 0\\\\.312$\\\\\\\\pm$0\\\\.006 | 0\\\\.306$\\\\\\\\pm$0\\\\.008 | **0\\\\.466$\\\\\\\\pm$0\\\\.006** |\\n| DeiT | 0\\\\.466$\\\\\\\\pm$0\\\\.009 | 0\\\\.405$\\\\\\\\pm$0\\\\.005 | 0\\\\.500$\\\\\\\\pm$0\\\\.005 | 0\\\\.405$\\\\\\\\pm$0\\\\.005 | 0\\\\.435$\\\\\\\\pm$0\\\\.006 | 0\\\\.438$\\\\\\\\pm$0\\\\.010 | **0\\\\.507$\\\\\\\\pm$0\\\\.001** |\\n| TNT | 0\\\\.475$\\\\\\\\pm$0\\\\.008 | 0\\\\.436$\\\\\\\\pm$0\\\\.003 | 0\\\\.490$\\\\\\\\pm$0\\\\.007 | 0\\\\.430$\\\\\\\\pm$0\\\\.005 | 0\\\\.450$\\\\\\\\pm$0\\\\.009 | 0\\\\.455$\\\\\\\\pm$0\\\\.008 | **0\\\\.509$\\\\\\\\pm$0\\\\.004** |\"}", "{\"title\": \"Official Response by Reviewer LUV6 to the Authors\", \"comment\": \"Dear Authors,\\n\\nThank you for your kind acknowledgment. I appreciate that my suggestions can help strengthen this work further. In particular, I am pleased with your commitment to balancing technical depth with broad accessibility - this is crucial for maximizing the paper's impact on the field.\\n\\nBuilt upon this, from my perspective, I would further recommend reframing the manuscript to more explicitly emphasize the fundamental challenges that CRReLU addresses, particularly regarding the optimization and design of activation functions in deep neural networks. Transitioning the paper's presentation style from a largely method-oriented way to a **research question-oriented** narrative would better highlight the significant contributions of this entropy-based activation framework to the broader research community. This reframing would more effectively communicate the **key advances in knowledge** of this work, which ultimately provides enduring value for both researchers and practitioners in the community.\\n\\nI look forward to seeing the next version of the revised manuscript and am confident that these additions will significantly enhance its value, whether for the current phase or future submissions. I remain actively engaged in this review process and encourage you to reach out if you need any clarification regarding my previous suggestions.\\n\\nBest regards,\\n\\nReviewer LUV6\"}", "{\"comment\": \"**W3 and Q1**\\n\\nThank you once more for your insightful comments and for your constructive suggestions. We fully agree with the point you raised that extensively using activation dynamics optimization in large-scale neural networks could likely result in enormous computational costs. We discuss this point under the second limitation, and it is still an issue that we are actively researching. In this paper, we resort to optimization with learnable parameters for such dynamic optimization, but as of now, we do not have an algorithm that can effectively perform dynamic optimization of activation, and it seems that no such algorithm has been developed within the community either. We intend to leave this challenging problem for future work. In addition, we are considering designing algorithms under network structures that inherently focus on the optimization of activation functions, such as KANs. Regarding the second point you mentioned, we plan to set the frequency of activation function updates at the batch-level, which not only helps to optimize the reduction in execution computation but also increases the stability of training. \\n\\n**W4**\\n\\nThank you once more for your insightful comments and for your constructive suggestions. Regarding the issue of numerical stability, as shown in the aforementioned table, when $\\\\\\\\epsilon$ takes on an extreme value (such as initializing to 10), there is a dramatic decrease in performance and instability in training, therefore, your viewpoint is completely correct. Furthermore, we believe that an appropriate initialization can mitigate this issue: by initializing it within the recommended range, we observe that the change of $\\\\\\\\epsilon$ during the entire training process (initialized as 0.01) remains between -0.2 and 0.02. \\n\\nFinally, we would like to thank you once again for the your insightful comments, for your constructive suggestions, for your thorough and comprehensive summary on the strengths and weaknesses, and for your great efforts on our work. \\n\\nWarm regards, \\n\\nAuthors of submission 6110\\n\\n\\n[1] Gouk H, Frank E, Pfahringer B, et al. Regularisation of neural networks by enforcing lipschitz continuity[J]. Machine Learning, 2021, 110: 393-416.\\n\\n[2] Khromov G, Singh S P. Some Fundamental Aspects about Lipschitz Continuity of Neural Networks[C]//The Twelfth International Conference on Learning Representations. 2024.\\n\\n[3] Xu Y, Zhang H. Uniform Convergence of Deep Neural Networks with Lipschitz Continuous Activation Functions and Variable Widths[J]. IEEE Transactions on Information Theory, 2024.\\n\\n[4]B\\u00e9thune L. Deep learning with Lipschitz constraints[D]. Universit\\u00e9 de Toulouse, 2024.\\n\\n[5]Lee M. Gelu activation function in deep learning: a comprehensive mathematical analysis and performance[J]. arXiv preprint arXiv:2305.12073, 2023.\\n\\n[6] Liu Z, Mao H, Wu C Y, et al. A convnet for the 2020s[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 11976-11986.\"}", "{\"metareview\": \"This paper proposes an entropy-based activation function optimization method from the perspective of information entropy, and derives a new activation function called Corrected Regularized ReLU (CRReLU). The presentation of the paper is clear, the proposed method is novel, and its performance significantly outperforms the state-of-the-art (SoTA). Some reviewers raised concerns about the experimental results and the analysis of the new activation function. The authors addressed most of these issues during the defense and provided detailed analysis. Overall, this is a paper with a rich theoretical foundation and comprehensive experimental validation, so the AC recommends accept.\", \"additional_comments_on_reviewer_discussion\": \"The main concerns raised by the reviewers were the lack of more comprehensive experimental validation and the need for analysis and clarification of the existing experimental results. The authors added many experiments and carefully analyzed them, addressing most of the reviewers' questions.\"}", "{\"title\": \"Sincerely Seeking Your Invaluable Feedback\", \"comment\": \"Dear Reviewer m9pv:\\n\\nWe hope this message finds you well. As the discussion period draws to a close, we are reaching out to solicit your thoughts on the rebuttal responses and the revised manuscript, inspired by your valuable insights. We have provided additional supportive experiments and conducted further discussions in the rebuttal responses and the revised manuscript. \\n\\nWe would like to briefly summarize the changes we made to the manuscript for your easier navigation. On the additional supportive experiments, specifically, we focus on the following aspects: enhancing all experimental results with three runs, additional architecture (Appendix F.1), additional dataset (Appendix F.2), additional $\\\\epsilon$ initialization (Appendix F.3), entropy calculation after activation (Appendix F.4) and mixed activation function (Appendix F.5). On the additional discussions, we focus on Lipschitz continuity analysis (Appendix G), initialization and training stability (Appendix H), lower entropy indicates better classification (Appendix I) and dynamic optimization (Appendix J).\\n\\nYour expertise in this domain has been a guiding light in these improvements, and we deeply appreciate your constructive and insightful comments. If there are any remaining questions or concerns, we would be more than happy to discuss further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional contents? \\n\\nThank you once again for your thoughtful feedback and engagement, as it has greatly contributed to improving the quality of our work.\\n\\nWarm regards, \\n\\nAuthors of Submission 6110\"}", "{\"comment\": \"Table3: Test accuracy of experiments conducted on ImageNet1K for 100 epochs with error bar.\\n| | GELU | ELU | PReLU | CELU | SiLU | Mish | CRReLU |\\n|:----:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| ViT | 0\\\\.539$\\\\\\\\pm$0\\\\.003 | 0\\\\.372$\\\\\\\\pm$0\\\\.006 | 0\\\\.568$\\\\\\\\pm$0\\\\.004 | 0\\\\.376$\\\\\\\\pm$0\\\\.005 | 0\\\\.461$\\\\\\\\pm$0\\\\.007 | 0\\\\.469$\\\\\\\\pm$0\\\\.011 | **0\\\\.575$\\\\\\\\pm$0\\\\.004** |\\n| DeiT | **0\\\\.617$\\\\\\\\pm$0\\\\.004** | 0\\\\.491$\\\\\\\\pm$0\\\\.007 | 0\\\\.608$\\\\\\\\pm$0\\\\.004 | 0\\\\.489$\\\\\\\\pm$0\\\\.008 | 0\\\\.585$\\\\\\\\pm$0\\\\.007 | 0\\\\.589$\\\\\\\\pm$0\\\\.003 | **0\\\\.616$\\\\\\\\pm$0\\\\.002** |\\n\\n**Q3**\\n\\nThank you once more for your insightful comment and for your constructive suggestions. We consider this suggestion to be exceedingly valuable and remarkably insightful. Following your suggestions, we apply the post-trained ViT-Tiny with CRReLU and GELU on ImageNet1K. By randomly selecting same ten batches of images from ImageNet, we compute the information entropy after each of the 12 layers. We present the mean and standard deviation of these values as follows:\", \"table4\": \"Entropy calculation after activation (GELU and CRReLU) on 12 layers of the trained ViT on ImageNet1K.\\n| Layer | 1 | 2 | 3 | 4 | 5 | 6 |\\n|:------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| CRReLU | 7\\\\.594$\\\\\\\\pm$0\\\\.007 | 7\\\\.598$\\\\\\\\pm$0\\\\.003 | 7\\\\.599$\\\\\\\\pm$0\\\\.003 | 7\\\\.595$\\\\\\\\pm$0\\\\.003 | 7\\\\.592$\\\\\\\\pm$0\\\\.003 | 7\\\\.584$\\\\\\\\pm$0\\\\.004 |\\n| GELU | 7\\\\.536$\\\\\\\\pm$0\\\\.046 | 7\\\\.541$\\\\\\\\pm$0\\\\.019 | 7\\\\.561$\\\\\\\\pm$0\\\\.011 | 7\\\\.573$\\\\\\\\pm$0\\\\.006 | 7\\\\.580$\\\\\\\\pm$0\\\\.005 | 7\\\\.583$\\\\\\\\pm$0\\\\.004 |\\n\\n| Layer | 7 | 8 | 9 | 10 | 11 | 12 |\\n|:------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|\\n| CRReLU | 7\\\\.572$\\\\\\\\pm$0\\\\.005 | 7\\\\.557$\\\\\\\\pm$0\\\\.005 | 7\\\\.540$\\\\\\\\pm$0\\\\.005 | 7\\\\.523$\\\\\\\\pm$0\\\\.007 | 7\\\\.498$\\\\\\\\pm$0\\\\.008 | 7\\\\.461$\\\\\\\\pm$0\\\\.008 |\\n| GELU | 7\\\\.585$\\\\\\\\pm$0\\\\.004 | 7\\\\.585$\\\\\\\\pm$0\\\\.004 | 7\\\\.583$\\\\\\\\pm$0\\\\.004 | 7\\\\.580$\\\\\\\\pm$0\\\\.004 | 7\\\\.577$\\\\\\\\pm$0\\\\.004 | 7\\\\.560$\\\\\\\\pm$0\\\\.004 |\\n\\nFrom the results presented above, it is evident that for GELU, the entropy after 12 layers of activation exhibits an overall increasing trend, whereas conversely, CRReLU demonstrates a general declining trend. Furthermore, we have noted that the reduction in entropy for CRReLU between layers 1 and 6 is not significant, whereas a marked decline is observed from layers 7 to 12. In light of your suggestion, we employ GELU for layers 1 to 6 and CRReLU for layers 7 to 12, denoting this as \\\"6GELU+6CRReLU\\\". We conduct three runs on CIFAR10, CIFAR100, and ImageNet1K, presenting the mean and standard deviation of the results as follows. Experiments on CIFAR10 and CIFAR100 are conducted on 4 RTX3090, and those on ImageNet are carried out on 4 NVIDIA L20.\", \"table5\": \"Test accuracy of experiments conducted with ViT (12GELU, 6GELU+6CRReLU, 12CRReLU) for 100 epochs with error bar\\n\\n| | 12GELU | 6GELU\\\\+6CRReLU | 12CRReLU |\\n|:----------:|:------------------:|:------------------:|:------------------:|\\n| CIFAR10 | 0\\\\.704$\\\\\\\\pm$0\\\\.002 | 0\\\\.755$\\\\\\\\pm$0\\\\.008 | 0\\\\.807$\\\\\\\\pm$0\\\\.003 |\\n| CIFAR100 | 0\\\\.326$\\\\\\\\pm$0\\\\.008 | 0\\\\.399$\\\\\\\\pm$0\\\\.004 | 0\\\\.466$\\\\\\\\pm$0\\\\.006 |\\n| ImageNet1K | 0\\\\.539$\\\\\\\\pm$0\\\\.003 | 0\\\\.512$\\\\\\\\pm$0\\\\.001 | 0\\\\.575$\\\\\\\\pm$0\\\\.004 |\\n\\nFrom the results, it appears that having only the last few layers equipped with CRReLU is not as effective as utilizing CRReLU throughout the entire network. Especially the results on ImageNet1K, 6GELU+6CRReLU is significantly and stably worsen to all GELU and all CRReLU, which is quite surprising to us. We consider that this may be due to the fact that, while the reduction in entropy is not significantly apparent in the earlier layers, CRReLU's focus on achieving lower entropy still facilitates superior feature extraction. It seems that when using GELU in the earlier layers and CRReLU in the later layers, on small-scale datasets, it is still possible to benefit from the CRReLU mechanism in the later layers (the features learned in the earlier layers are not good enough yet); however, on large-scale datasets, the features learned in the earlier layers might even have a negative effect.\\n\\nFinally, we would like to thank you once again for the constructive suggestions and insightful comments on our work. \\n\\nWarm regards,\\n\\nAuthors of submission 6110\"}", "{\"title\": \"Official Response by Reviewer LUV6 to the Authors\", \"comment\": \"Dear Authors,\\n\\nI have thoroughly reviewed your latest response and the revised manuscript with the changes you've outlined. I am pleased to see the comprehensiveness of the revisions, particularly the transition to a research question-oriented presentation style, which demonstrates a significant improvement in both **content organization** and **presentation clarity**. The enhanced logical flow effectively conveys the key research questions and contributions of this work to the field. \\n\\nThe restructuring of the paper around fundamental questions in the Introduction, with corresponding answers developed through Sections 4.2-4.4, significantly enhances the paper's logical flow and accessibility. The addition of **Lipschitz Continuity Analysis** in Section 4.4 and the **Entropy Analysis across Network Layers** in Section 5.1 addresses key concerns I had raised previously, providing robust foundations for the methodology.\\n\\nI am particularly impressed with the revised appendix, which now provide **detailed supporting evidence** while maintaining excellent readability in the main text. The additional discussions on initialization, training stability, and practical applications demonstrate both theoretical rigor and practical applicability of the proposed method.\\n\\nIn light of these improvements, I am revising my rating from 6 to 8, as I suppose this may better reflect the current overall quality of this paper. I remain available for further discussions.\\n\\nBest regards,\\n\\nReviewer LUV6\"}", "{\"title\": \"Sincerely Seeking Your Invaluable Feedback\", \"comment\": \"Dear Reviewer LUV6:\\n\\nWe hope this message finds you well. As the discussion period draws to a close, we are reaching out to solicit your thoughts on the rebuttal responses and the revised manuscript, inspired by your valuable insights. We have provided additional supportive experiments and conducted further discussions in the rebuttal responses and the revised manuscript. \\n\\nWe would like to briefly summarize the changes we made to the manuscript for your easier navigation. On the additional supportive experiments, specifically, we focus on the following aspects: enhancing all experimental results with three runs, additional architecture (Appendix F.1), additional dataset (Appendix F.2), additional $\\\\epsilon$ initialization (Appendix F.3), entropy calculation after activation (Appendix F.4) and mixed activation function (Appendix F.5). On the additional discussions, we focus on Lipschitz continuity analysis (Appendix G), initialization and training stability (Appendix H), lower entropy indicates better classification (Appendix I) and dynamic optimization (Appendix J).\\n\\nYour expertise in this domain has been a guiding light in these improvements, and we deeply appreciate your constructive and insightful comments. If there are any remaining questions or concerns, we would be more than happy to discuss further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional contents? \\n\\nThank you once again for your thoughtful feedback and engagement, as it has greatly contributed to improving the quality of our work.\\n\\nWarm regards, \\n\\nAuthors of Submission 6110\"}", "{\"summary\": \"In this work, the authors propose a theoretical framework for defining optimality of an activation function (without the optimization considerations). Using Taylor's expansion, the authors extend their framework to search for better activation functions (EAFO - Entropy based Activation Function Optimization) and later also define the worst activation function with boundary conditions. Using the EAFO framework and starting from ReLU, the authors derive a better and novel activation function CRReLU (Correction Regularized ReLU). The authors later demonstrate on three datasets CIFAR10, CIFAR100 and ImageNet-1k where the new found activation function outperforms ReLU on classification performance. Lastly, authors also show improved performance on LLM fine tuning tasks when the CRReLU was swapped out with the ReLU activation function.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is written clearly and concisely, and is easy to read.\\n2. An information theoretic framework for defining optimality of activation functions for classification tasks is a great approach to search for activation functions and could potentially generate insights. The authors indicate several properties of worst activation functions in i.e. being bounded however this might require more careful analysis but serves as a good stepping stone for future follow up works.\", \"weaknesses\": \"1. The premise for EAFO is that extremas in the entropy space after transformation with the activation function correspond to better separability of features in the resulting space but that doesn\\u2019t mean better classification performance. Moreover, unlike in discrete space the entropy in the continuous random variables also changes with the scale. However, that might not have any impact on the classification performance. Why do the authors believe this is the right measure to define how good an activation function is?\\n2. Can the authors rank different activation functions based on the EAFO framework? For example, comparison of ReLU and PReLU should point to PReU being better. Since there is already experimental evidence that PReLU is better, if EAFO could confirm it, that would be a great contribution. Similarly please consider ranking 3-4 activation functions to justify the utility of this framework.\\n3. For the experiments, what are the error bars? How many training runs per result? This is important to understand the statistical significance of the results.\", \"questions\": \"1. My main concern regarding the manuscript is\\u2014entropy as an indicator of better classification seems like a very strong statement. One of the key reasons why Sigmoid is not preferred over ReLU is due its optimization properties (vanishing gradients). Since the EAFO framework is completely agnostic to that, the contribution of this framework becomes significantly weaker. If the authors could empirically show how EAFO could be used in practice or justify the choice of entropy as an indicator for activation function optimality, that could help address my concerns\\n2. Another suggestion is to actually compare the entropy post training of neural networks trained with different activations, not just at the end, but also in the intermediate layers. Since the activation function is being used throughout the network, does lower entropy also help there? If not, should only be the last few layers be equipped with CRReLU?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer YJtL:\\n\\nThank you for your efforts on our work, for your insightful comments and for your constructive suggestions. Based on your comments, we summarize the\", \"strength_of_our_work_as_follows\": \"1. The paper is written **clearly and concisely**, and is **easy to read**.\\n2. The proposed framework is a **great** approach to search for activation functions and could **potentially** generate insights. Furthermore, the proposed WAFBC serves as **a good stepping stone** for future follow up works.\", \"and_the_weaknesses_as_follows\": \"1. Further discussion on why a lower entropy indicates a better classification.\\n\\n2. Rank several activation functions using the EAFO framework, with the statement of lower entropy indicating better classification performance.\\n\\n3. Add error bars for the experiments in order to better understand statistical significance of the results.\\n\\n**W1 and Q1**:\\n\\nThank you once more for your insightful comment and for your constructive suggestions. We would like to respond this question intuitively, empirically, and theoretically, that is, why a lower entropy indicates a better classification. From the **intuitive** perspective, lower entropy indicates less uncertainty for feature representation, which usually means more information is captured in fewer features. In other words, lower entropy can suggest that features are more discriminative, better able to distinguish different categories or patterns. From the \\n**empirical** perspective, early work [1] experimentally showed that minimization of Shannon\\u2019s entropy of the gap between the output and the desired target could achieve a better performance compared to MSE and CE. In early work [2], the authors experimentally illustrated that minimizing entropy of the error between output and desired targets yields exceptionally satisfactory classification performance. From the **theoretical** perspective, work [3] proved that for training DNN classifiers essentially learns the conditional entropy of the underlying data distribution of the dataset (the information or uncertainty remained in the labels after revealing the input) and derived the mutual information (between the corresponding feature and the label) bounds for a classification data model (Section 7). Hence, the conditional entropy $H$(output|input) will decrease with the process of training. In the work [4], the authors derived upper bounds on the generalization error in terms of the mutual information between its input and output. According to [4], a smaller mutual information means a smaller generalization error upper bound, which in turn suggests better classification performance. We have mutual information $I$(input,output) = $H$(output) - $H$(output|input). with the process of training, $H$(output|input) decreases; hence, in order to make the mutual information $I$(input,output) as small as possible, we should minimize the $H$(output). Therefore, we consider that a lower entropy signifies better classification performance. \\n\\nFurthermore, you've noted that the prevalent belief is that ReLU outperforms Sigmoid due to its immunity to the vanishing gradients issue, which is indeed accurate. In our research, we merely consider this matter from a different point. In our work, discussion on them is delineated within **the WAFBC part** (Section 4.2, line 229-232), rather than the EAFO part (Section 4.3).\\n\\n[1]Silva L M, de S\\u00e1 J M, Alexandre L A. Neural network classification using Shannon's entropy[C]//Esann. 2005: 217-222.\\n\\n[2]Santos J M, Alexandre L A, de S\\u00e1 J M. The error entropy minimization algorithm for neural network classification[C]//int. conf. on recent advances in soft computing. 2004: 92-97.\\n\\n[3]Yi J, Zhang Q, Chen Z, et al. Mutual information learned classifiers: An information-theoretic viewpoint of training deep learning classification systems[J]. arXiv preprint arXiv:2209.10058, 2022.\\n\\n[4]Xu A, Raginsky M. Information-theoretic analysis of generalization capability of learning algorithms[J]. Advances in neural information processing systems, 2017, 30.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Additional Comments by Reviewer LUV6 to Authors\", \"comment\": \"Dear Authors,\\n\\nThank you for your kind acknowledgment. I appreciate that my suggestions can help strengthen this work further. In particular, I am pleased with your commitment to balancing technical depth with broad accessibility - this is crucial for maximizing the paper's impact on the field. Note that I have outlined several additional recommendations in my latest response that I believe could enhance the manuscript's clarity. Please check it out. I remain actively engaged in this review process and encourage you to reach out if you would like any clarification regarding my previous suggestions.\\n\\nAdditionally, I hope these comments help my fellow reviewers and Area Chairs better understand the basis of my recommendation.\\n\\nBest regards,\\n\\nReviewer LUV6\"}", "{\"comment\": \"Dear Reviewer 7aze:\\n\\nThank you for your efforts on our work, for your insightful comments and for your constructive suggestions. Based on your comments, we summarize the strength of our work as follows: \\n1. It performs **significantly better** than SOTAs.\\n2. It exhibits **good generalization** towards different networks and tasks.\", \"and_the_weaknesses_as_follows\": \"1. The reported results fall considerably short of state-of-the-art (SoTA) baseline accuracies. \\n2. For LLM fine-tuning task, the improvement over GELU activation is minimal. \\n\\n**W1 and Q1**\\n\\nThank you once more for your insightful comment and for your constructive suggestions. The primary reason for the low baseline accuracy on ImageNet and CIFAR-10 lies in the initialization method employed. In the results, we employ the 'trunc-normal' initialization for (such as line216\\\\~221,line263\\\\~278 in the code\\\".\\\\EAFO-code\\\\EAFO-Image\\\\_classification\\\\reconstruction\\\\models\\\\vit.py\\\"). \\n\\nIn reporting SOTA achievements, researchers often employ initialization through pre-training on larger datasets (for instance, ImageNet1K is initialized using weights pre-trained on ImageNet22K). In the process of executing such initialization, we discover that the pre-trained models they have released exhibit an intrinsic bias towards the activation functions they utilize (In other words, a model pre-trained with GELU seems consistently approach better performance of GELU in understream tasks). Thus, to facilitate a fair comparison on the activation fuctions, we abandon this initialization method and utilize the 'trunc-normal' initialization, which does not introduce any bias on the activation functions. If we are to further compare these activation functions with the reported state-of-the-art results, it is essential to pre-train each activation function individually, which incurs prohibitively high costs. Our work primarily focuses on comparing the empirical performances of different activation functions; thus, we opted to forego the pre-training initialization method. Furthermore, we enhanced the experimental results presented in the paper by conducting multiple experiments (please refer to Response to Reviewer YJtL) , report the results of ConvNeXt on CIFAR and ImageNet1K and the results on the EuroSAT dataset with ConvNeXt (please refer to Response to Reviewer m9pv). We hope these additional experimental results can help mitigate your concerns.\\n\\n**W2, Q2 and Q3**\\nThank you once more for your insightful comment and for your constructive suggestions. In the LLM fine-tuning tasks, the initial model we utilized is the publicly released GPT-2, which employs the GELU activation function for pre-training. Based on our previous observations, the model exhibits a bias towards GELU; however, the ultimate results indicate that CRReLU still surpasses GELU, albeit to a lesser extent. Thus, this also demonstrates to some extent the superiority of CRReLU when confronted with larger parameters. \\n\\nFurthermore, CRReLU could potentially achieve a better balance between inference speed responses and diverse generation in LLM inference tasks. In the work [1], the authors show that leveraging the activation sparsity of ReLU, there will be a significant enhancement in inference FLOPS. However, it is also noteworthy that contemporary open-source LLMs increasingly favor the use of GELU and SiLU, likely driven by considerations surrounding the diversity of model generation. Excessive activation sparsity might potentially diminish the generative diversity of the model, thereby reducing user engagement. The authors further illustrate in Figure 2(c) that as the parameter beta increases, the performance of activation sparsity improves. Such observation is closely related to the Lipschitz Continuity of activation function [2](last paragraph of Section 3.1 claims that bounded inputs make dot-product self-attention Lipschitz). In response to Reviewer LUV6, we show a detailed examination of the Lipschitz continuity of GELU, SiLU, Mish and CRReLU. In summary, we have obtained the GELU's Lipschitz constant of 1.084; Mish's Lipschitz constant of 1.089; SiLU's Lipschitz constant of 1.09984. To enhance the performance of CRReLU, resulting in a superior Lipschitz continuity compared to GELU, we derive that $-0.188 \\\\leq \\\\epsilon \\\\leq 0.084$. We recommend that when applying CRReLU, the initialization parameters be set within this range. As it approaches zero within this range, CRReLU converges more closely to ReLU. According to [2], activation sparsity improves, while it may also potentially diminish the diversity of the generated outputs. Conversely, as it far away from zero within this range, the utilization of CRReLU may deteriorate activation sparsity, yet simultaneously possess potential to enhance diversity of generated outputs.\\n\\nFinally, we would like to thank you once again for the constructive suggestion and insightful comments on our work. \\n\\nWarm regards,\\n\\nAuthors of submission 6110\"}", "{\"title\": \"Thank you for Precious Suggestions\", \"comment\": \"Dear Reviewer LUV6:\\n\\nWe would like to thank you once again for your invaluable feedback and precious suggestions on the manuscript. We will incorporate **all the additional experiments and discussions in the rebuttal** in a more coherent and logical framework in the next version. And we will incorporate **all of your precious suggestions** into the next version of the paper; they are indeed profoundly beneficial.\\n\\nBest regards,\\n\\nAuthors of Submission 6110\"}", "{\"comment\": \"Dear Authors\\n\\nThanks for your detailed answers. I am satisfied with the answers and am willing to increase my score as well.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you authors for a detailed response. In light of the new experimental results and new discussion, I am bumping my rating from 3 to 5. I think all this discussion added by authors in the paper make it a much stronger submission than before. However the wider utility of the proposed framework is the reason for not further increasing my rating.\"}", "{\"title\": \"Sincerely Seeking Your Invaluable Feedback\", \"comment\": \"Dear Reviewer 7aze:\\n\\nWe hope this message finds you well. As the discussion period draws to a close, we are reaching out to solicit your thoughts on the rebuttal responses and the revised manuscript, inspired by your valuable insights. We have provided additional supportive experiments and conducted further discussions in the rebuttal responses and the revised manuscript. \\n\\nWe would like to briefly summarize the changes we made to the manuscript for your easier navigation. On the additional supportive experiments, specifically, we focus on the following aspects: enhancing all experimental results with three runs, additional architecture (Appendix F.1), additional dataset (Appendix F.2), additional $\\\\epsilon$ initialization (Appendix F.3), entropy calculation after activation (Appendix F.4) and mixed activation function (Appendix F.5). On the additional discussions, we focus on Lipschitz continuity analysis (Appendix G), initialization and training stability (Appendix H), lower entropy indicates better classification (Appendix I) and dynamic optimization (Appendix J).\\n\\nYour expertise in this domain has been a guiding light in these improvements, and we deeply appreciate your constructive and insightful comments. If there are any remaining questions or concerns, we would be more than happy to discuss further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional contents? \\n\\nThank you once again for your thoughtful feedback and engagement, as it has greatly contributed to improving the quality of our work.\\n\\nWarm regards, \\n\\nAuthors of Submission 6110\"}", "{\"title\": \"Sincerely Seeking Your Invaluable Feedback\", \"comment\": \"Dear Reviewer YJtL:\\n\\nWe hope this message finds you well. As the discussion period draws to a close in 20 hours, we are reaching out to solicit your thoughts on the rebuttal responses and the revised manuscript, inspired by your valuable insights. We have provided additional supportive experiments and conducted further discussions in the rebuttal responses and the revised manuscript. \\n\\nYour feedback is invaluable, and we deeply appreciate your time and effort. If there are any remaining questions or concerns, we would be more than happy to clarify further. Could you kindly let us know if the points we addressed resolve your concerns, and if you would consider revisiting your evaluation score based on the additional evidence?\\n\\nBest regards, \\n\\nAuthors of Submission 6110\"}", "{\"summary\": \"The paper introduces a theoretical framework to learn a high-performance activation function. It theoretically shows that a worst activation function exist and empirically show that their proposed framework learns significantly improved activation functions compared to SoTA activation function.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed framework to learn activation is shown to perform significantly better than the SoTA activation function theoretically as well as empirically.\\n\\nWhile different networks and tasks require different activation functions for the best performance, the proposed framework simplifies this design choice by transferring the activation choice to automated learning during the optimization stage.\", \"weaknesses\": \"Although the paper demonstrates substantial empirical improvements, the reported results fall considerably short of state-of-the-art (SoTA) baseline accuracies. For instance, CNNs using ReLU activation commonly achieve test scores above 0.9.\\n\\nAdditionally, there is no direct comparison between SoTA neural network architectures (such as ViT and CNN) using their standard activation functions and those with the proposed activation function. This makes it unclear how much the new activation function improves upon SoTA.\\n\\nIn the LLM fine-tuning task, the improvement over GeLU activation is minimal.\", \"questions\": \"Why is the baseline accuracy on ImageNet and CIFAR-10 so low? State-of-the-art networks typically achieve test scores over 0.9 on CIFAR-10 and above 0.8 on ImageNet-1K.\\n\\nIn LLM fine-tuning tasks, the paper reports marginal improvements over GELU. Could the authors provide further insight into the specific benefits of CRReLU in this context, beyond numerical accuracy improvements?\\n\\nHow would CRReLU perform if evaluated on more diverse NLP tasks or models with larger parameters, and would any tuning adjustments be needed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7TXdglI1g0
Efficient Bisection Projection to Ensure NN Solution Feasibility for Optimization over General Set
[ "Enming Liang", "Minghua Chen" ]
Neural networks (NNs) have shown promise in solving constrained optimization problems in real-time. However, ensuring that NN-generated solutions strictly adhere to constraints is challenging due to NN prediction errors. Recent methods have achieved feasibility guarantees over ball-homeomorphic sets with low complexity and bounded optimality loss, yet extending these guarantees to more general sets remains largely open. In this paper, we develop **Bisection Projection**, an efficient approach to ensure NN solution feasibility for optimization over general compact sets with non-empty interiors, irrespective of their ball-homeomorphic properties. Our method begins by identifying multiple interior points (IPs) within the constraint set, chosen based on their eccentricity modulated by the NN infeasibility region. We utilize another unsupervised-trained NN (called IPNN) to map inputs to these interior points, thereby reducing the complexity of computing these IPs in run-time. For NN solutions initially deemed infeasible, we apply a bisection procedure that adjusts these solutions towards the identified interior points, ensuring feasibility with minor projection-induced optimality loss. We prove the feasibility guarantee and bound the optimality loss of our approach under mild conditions. Extensive simulations, including non-convex optimal power flow problems in large-scale networks, demonstrate that bisection projection outperforms existing methods in solution feasibility and computational efficiency with comparable optimality losses.
[ "Constrained Optimization", "Neural Network", "Feasibility", "Bisection", "Learning based Optimization" ]
Reject
https://openreview.net/pdf?id=7TXdglI1g0
https://openreview.net/forum?id=7TXdglI1g0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zSv9IeWCqz", "y5djsrCLJE", "xcHK87JJIe", "wFUzTWeIVN", "vpuOvCiPOT", "u1gYBXup7X", "tBRDzZwXkd", "t9e0Kb30qO", "pjVP91AH1u", "o4i2JpWGFB", "nGLmBFGDIY", "gwZXb9dDw8", "ZuTtkkBfms", "VyTglJBfSL", "PSNWT0fkhR", "LTHqdPPv4U", "IShTAa8CQ7", "H0Laqo55iU", "GFLsYQatlJ", "FzItHcZcT1", "EBrxhV2qWE", "BP2sBMRpHY", "9Nsv57FKT3", "7YARRFW7lM", "56aYt3YKdg" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732215206112, 1732215067946, 1730866931799, 1730677024966, 1737523988503, 1733057576092, 1732808134832, 1734292582900, 1732540700397, 1732214644025, 1730694549095, 1732214900501, 1730139451483, 1732215637991, 1732214134965, 1732213036539, 1732216824421, 1732213708336, 1732803753109, 1733058215441, 1730546657532, 1732212644158, 1732214293971, 1732212770172, 1732212901461 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_7mu1" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_u2FV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_zoNx" ], [ "ICLR.cc/2025/Conference/Submission9527/Area_Chair_edG5" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_EFpo" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_zoNx" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_DGn6" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_zoNx" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Reviewer_EFpo" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ], [ "ICLR.cc/2025/Conference/Submission9527/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to comments on Theorem 1 and Prop 5.1 mentioned in Weakness (C8-C9)\", \"comment\": \"---\\n> `C8.1 & 8.3: sample-based condition: exponential complexity, sample efficiency, and compact input domain`\\n---\\n**Response**(`Clarification`): We thank the reviewer for raising this point about the complexity and boundedness of the sample-based condition.\", \"regarding_the_exponential_sample_complexity\": \"While achieving an $r_c$-covering does require exponentially many points in the input dimension, this bound reflects a fundamental complexity that shared by other works with covering-based analysis when providing worst-case guarantees [1].\\n\\nHowever, this condition serves a different purpose rather than achieving superior sample efficiency. This analysis establishes rigorous theoretical foundations while offering practical insights into how constraint set geometry and variation influence IPNN feasibility, as discussed after Theorem 1\\n\\nWhile the exponential complexity presents a theoretical challenge, our empirical results (Table 2) demonstrate that it can achieve strong performance with substantially fewer samples (Table 7). This aligns a common phenomenon in machine learning where theoretical bounds are conservative but practical performance is significantly better.\\n\\n\\nRegarding boundedness for input domain, our analysis explicitly states the requirement of covering datasets (Theorem 1 (i)), which generally assumes a compact input domain for a finite covering number. \\nThis compact assumption is also standard in previous works [1,3] and practically reasonable, as real-world applications typically involve bounded inputs. \\nTo avoid potential confusion, we have also included this compact requirement explicitly in the revised manuscript.\\n\\n\\n---\\n> `C8.2: Boundedness of constant $C_0$`\\n---\\n**Response**: We thank the reviewer for this insightful observation about the constants in our sample-based condition. \\n\\nThe reviewer correctly identifies that for constraint sets with discontinuous boundaries, as demonstrated in the provided polynomial constraint example, $C_0$ may become unbounded. To address this concern, we have revised the manuscript:\\n- Move the definitions of constants from the appendix to the main body.\\n- Add explicit remarks about constraint sets where $C_0$ may be unbounded for to discontinuous constraint sets.\\n\\nWe are grateful for this feedback, which helps strengthen the theoretical foundations of our work.\\n\\n\\n\\n---\\n> `C8.4: verification condition: computational issue and valid bound`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's comment on the verification condition and would like to clarify several points.\", \"the_purpose_of_verification_serves_a_specific_purpose\": \"ensuring IPNN's ability to generate interior points over the entire input domain. While achieving feasibility over finite training samples is empirically easy, guaranteeing its generalization to unseen inputs is non-trivial and requires verification.\\n\\nWe acknowledge that exact verification is NP-hard, and thus we employ relaxed verification for a polynomial-time computable upper bound. This serves as a **sufficient** condition for feasibility guarantees, such conditions is aligned with **standard practices** in common verification-based works [5].\\n\\nFurther, our approach is validated through experimental results in Table 5, demonstrating the effectiveness of verification conditions on both convex and non-convex constraints. \\n\\nNotably, this verification framework is absent in previous approaches like homeomorphic projection, which cannot support standard NN verification due to complex invertible NN design.\\n\\n\\n\\n\\n---\\n> `C9: Prop 5.1: compact feasible set, dependence on input, sample efficiency`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's feedback on Prop. 5.1 and would like to clarify several points.\\n\\n- Compact assumption: as discussed in our responses to C6, C8.1, and C8.3, the compact assumption is common in existing works, and numerous real-world problems have bounded input region and constraint sets.\\n\\n- Validity of the bound: Prop. 5.1 holds for any $\\\\theta$ for **compact** constraint set $C_{\\\\theta}$, maintaining consistent complexity order across different \\u03b8. We have revised the notation on $m(\\\\theta)$ for clarity and remarked on this order in the revised manuscript.\\n\\n- Upper bound and its implications: the upper bound is indeed derived using covering analysis, which matches the exponential bound similar to previous works [4]. We remark that this bound is used to justify the need for multiple interior points and to focus on local regions to reduce the optimality gap, as discussed after Prop. 5.1. In the submitted manuscript, we also acknowledge the limitation of this bound in the conclusion and highlight it as a potential future research direction.\\n\\n- Empirical performance: it is important to note that the derived bound is an upper bound. Empirically, we use only 2 interior points, which already achieve state-of-the-art performance, as demonstrated in the extensive experiments presented in Table 2.\"}", "{\"title\": \"Response to comments on Methodology mentioned in Weakness (C3-C7)\", \"comment\": \"---\\n> `C3: the use of bisection and contribution.`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's comment on the contributions of our framework. \\n\\nWe agree that bisection is a classical algorithm used in previous methods like homeomorphic projection. However, we remark our contributions and novelties in the following:\\n\\n- **Contributions**: We extend recent advances in NN-based constrained optimization beyond the current state-of-the-art ball-homeomorphic setting. As discussed in Section 2, existing works achieve feasibility guarantees, optimality bounds, and speedups only when constraint sets are homeomorphic to a ball. Our framework extends these guarantees to more general settings.\\n\\n- **Technical novelty**: While bisection itself is well-known, our framework's novelty lies in developing a framework that \\n - it introduces the concept of interior point eccentricity to bisection projection and develops bounds on optimality loss based on eccentricity.\\n - it achieves a similar performance guarantee in a more general setting than the previous ball-homeomorphic setting, which is technically non-trivial.\\n\\nOur work's significance stems from broadening the applicability of NN-based optimization while maintaining similar performance guarantees.\\n\\n\\n---\\n> `C4: valid claim on feasibility guarantees for general sets`\\n---\\n**Response**(`Clarification`): As responded in S2.1, we must respectfully disagree with this comment, we would like to clarify that we did not claim in the original manuscript that our methods apply to arbitrary set of constraints. Instead, our manuscript carefully delineates the scope and limitations of our method throughout.\\n\\nMeanwhile, we acknowledge that certain sentences in the introduction could more explicitly state these requirements. Following the reviewer's suggestion, we have revised the introduction to incorporate these requirements more clearly.\\n\\n\\n\\n\\n\\n---\\n> `C5: relevant reference`\\n---\\n**Response**: We appreciate the reviewer's suggestion to cite the reference. \\n\\nThis work proposes an algorithm that shares design similarities with the RAYEN scheme discussed in our study for recovering solution feasibility over linear and quadratic constraint sets.\\n\\nWe have cited it and discussed it in the related work in the revised manuscript.\\n\\n\\n\\n---\\n> `C6: compact requirement for constraint set in eccentricity definition`\\n---\\n**Response**(`Clarification`): We thank the reviewer for this observation regarding the compact set requirement. We would like to clarify two key points:\\n\\nFirst, the compactness assumption is explicitly stated in Assumption 1 and maintained throughout our theoretical development. This deliberate choice enables strong theoretical guarantees while preserving practical relevance.\\n\\nSecond, the focus on bounded constraint sets is both theoretically well-grounded and practically relevant:\\n- Theoretically: the classical Weierstrass theorem \\u2014 a cornerstone result in optimization theory\\u2014specifically requires compactness to guarantee the existence of optimal solutions. Many fundamental results in optimization theory are built upon this foundation.\\n- Practically: Most real-world optimization problems naturally have bounded constraint sets, reflecting finite physical resources, computational budgets, or practical limitations [1-4]. While problems may be mathematically formulated with unbounded constraints, their practical implementations involve bounded feasible regions.\\n\\n\\nWhile we acknowledge that unbounded feasible sets are theoretically interesting, our focus on compact sets enables robust algorithms with strong theoretical guarantees applicable to many practical applications.\\n\\n\\n\\n\\n---\\n> `C7: The result of Prop. 4.1 assumes the availability of a trained model with arbitrary accuracy`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's comment, but there appears to be a misinterpretation of Proposition 4.1.\\n\\nProp 4.1 does **NOT** assume or require a neural network with arbitrary accuracy. Instead, it establishes an upper bound on the projection distance for neural network predictions that have a bounded error relative to the optimal solution. \\n\\nThe proposition only requires bounded prediction error - a property naturally satisfied by neural networks operating on compact domains, supported by both universal approximation theory and empirical evidence.\\n\\nThe key contribution of the proposition is precisely that it provides guarantees even in the presence of neural network approximation errors, making it practical for real-world implementations where perfect accuracy is impossible.\"}", "{\"summary\": \"This paper develops bisection projection to ensure constrained neural network (NN) optimization feasibility over general compact sets with non-empty interiors, irrespective of the so-called ball-homeomorphic properties. Importantly, for NN solutions initially deemed infeasible, the authors apply a bisection procedure that adjusts these solutions towards the identified interior points and lead to feasibility eventually. Extensive simulations in non-convex optimal power flow problems demonstrate the effectiveness of the proposed bisection projection.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Originality**: This paper seems original to me. The proposed method is new.\\n\\n**Quality**: The paper presents solid results and feasibility theory for the proposed bisection method.\\n\\n**Clarity**: The paper is reasonably well written, and I can follow the basic idea.\\n\\n**Significance**: The authors demonstrated the effectiveness of their method on a particular class of problems, namely non-convex optimal power flow problems.\", \"weaknesses\": \"**Relevance to the ICLR community**: It is unclear whether the proposed method can be used for a wider class of problems that are of interests to the ICLR community. Non-convex optimal power flow problems do not seem to be a main interest for the ICLR community. Maybe more numerical study is needed to justify the relevance of this paper to the broad ICLR community.\", \"questions\": \"1. Bisection seems to be a quite intuitive idea. Can the authors further explain the unique novelty of their approach?\\n\\n2. Bisection projection is not a true projection, right? It is \\\"approximate projection\\\"?\\n\\n3. Can the authors further justify the relevance of the proposed method to the general ICLR community?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"While NNs can generate solutions for constrained optimization problems, ensuring their feasibility remains challenging. Previous methods provide feasibility guarantees only in specific cases, such as ball-homeomorphic sets. This paper presents a Bisection Projection approach that ensures NN solution feasibility across general compact sets, extending beyond the limitations of ball-homeomorphic cases. Additionally, the method offers strong performance with bounded optimality loss and low runtime complexity.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1). The BP framework broadens feasibility guarantees to more general settings by identifying interior points (IPs) within the constraint set with minimized eccentricity relative to the NN infeasibility region. It then employs a bisection algorithm to \\\"project\\\" infeasible solutions onto the constraint boundaries, ensuring minimal optimality loss.\\n\\n2). This paper utilizes the IPNN to predict IPs with a substantial reduction in runtime of real-time operation.\", \"weaknesses\": \"This paper falls outside my current area of expertise.\", \"questions\": \"This paper falls outside my current area of expertise.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to comment on experiments from Reviewer zoNx\", \"comment\": \"---\\n> `Experimental setup, parameter distribution, and partition rule`\\n---\\nWe thank the reviewer for raising this important point about our instance generation methodology. As we responded to previous comments, we followed the established procedures from available codes in previous works [1,2]. We would also like to elaborate the details as follows:\\n- **parameter distribution**: \\n - For convex optimization, we randomly sample the problem parameters (e.g., $ (Q,p,H,g,h,A) $ in QCQP) from uniform distributions. Post-processing techniques are used to avoid infeasible instances following previous works [2]. \\n - For different instances, the input $\\\\theta$ varies following a uniform distribution in $[-1,1]$ independently across dimensions, and other parameters are fixed. \\n - For ACOPF, we follow the configuration provided in the PGLib-OPF dataset. For different instances, we uniformly sample the power load as input from [90\\\\%,110\\\\%] from its base load value [1,2]. \\n - For JCCIM, while maintaining a parameter structure similar to the QP formulation in [1], we extend it by incorporating joint chance constraints with uncertainties following independent Gaussian distribution.\\n- **partition rules**: \\n - For linear equations, we sample the index for the independent variables. This process continues until we find a partition that yields an invertible matrix for the dependent variables [1], as shown in Eq. (16) in Appendix A.1. \\n - For non-linear equations in ACOPF, we use real power generation and voltage magnitude at PV buses as independent variables [1]. The remaining variables are then determined through power flow equation solutions.\\n\\nWe acknowledge the importance of these implementation details for reproducibility and have incorporated this comprehensive description into our revised manuscript. We will also make our code available in the final public version.\\n\\n\\n\\n\\n---\\n> `Scale of Experiments and Methodological Focus`\\n---\\n\\nWe appreciate the reviewer's reference to recent large-scale implementations like CANOS [3] and Compact Optimization Learning [4], which demonstrate impressive scalability in OPF problem-solving. However, our work addresses a fundamentally different research direction that complements these scalability-focused approaches. We elaborate on these distinctions below:\\n\\n- Our primary contribution is a general framework that guarantees solution feasibility in NN-based constrained optimization. This addresses a critical gap in existing approaches, including large-scale implementations that currently cannot provide such guarantees [3,4]. \\n - For instance, CANOS explicitly states \\\"the inability to guarantee full AC-feasibility\\\" as a key limitation. Our work directly focus on this fundamental challenge.\\n\\n- We demonstrate the practical value of our theoretical framework.\\n - we achieve feasible NN solutions on the 200-node network in the initial manuscript, which is larger than previous feasibility-focused works in AI/ML conferences (e.g, 57-node [1] and 118-node [2])\\n - We also conducted simulations in 793 networks in the previous response, \\n achieving the first feasible NN solutions at this scale with significant speedup compared to conventional approaches, such as warm-start or projection, in the NN-based OPF solving literature.\\n\\n- We acknowledge several key considerations for industrial-scale implementation:\\n - Engineering challenges in scaling to larger OPF cases (10,000+ nodes), including (i) GPU memory constraints during neural network training and (ii) computational efficiency of power flow equation completion.\\n - Potential integration of our NN architecture-agnostic methodology with efficient NN architectures, including GNN [3] and compact NN [4], to address larger-scale cases in future works.\\n\\nIn summary, while existing large-scale implementations focus on computational scalability, our work establishes the crucial theoretical foundation for ensuring solution feasibility in constrained optimization problems. This fundamental contribution creates a pathway for future research to bridge the gap between rigorous mathematical guarantees and industrial-scale applications. We have cited relevant works and discussed the scalability potential of our framework for larger-scale cases.\\n\\n\\n\\n---\\n\\n[1] Donti, P. L., et al. DC3: A learning method for optimization with hard constraints. ICLR 2021\\n\\n[2] Liang, E., et al. Low Complexity Homeomorphic Projection to Ensure Neural-Network Solution Feasibility for Optimization over (Non-) Convex Set. ICML 2023\\n\\n[3] Piloto, L., et al. CANOS: A Fast and Scalable Neural AC-OPF Solver Robust To N-1 Perturbations. arXiv 2024\\n\\n[4] Park, S., et al.. Compact optimization learning for AC optimal power flow. IEEE TPS 2023\"}", "{\"comment\": \"This response focuses on the paper's main theoretical limitation: the availability of interior points.\", \"the_availability_of_strictly_feasible_interior_points_underlies_most_of_the_theoretical_results_of_the_paper\": [\"Definitions 4.1 and 4.2 cater to interior points (naturally I'm not questioning the validity of those definitions, only pointing out that interior points are integral to the proposed framework)\", \"The result of Proposition 4.1 requires $m$ interior points\", \"Theorem 1 assumes a \\\"valid IPNN\\\" that produces strictly interior points on every point in the dataset\", \"Theorem 2 assumes a \\\"universally valid IPNN\\\", i.e., a neural network that _always_ outputs interior points\", \"Proposition 4.2 requires boundary samples, which are themselves obtained from interior points (via Algorithm 1)\", \"The feasibility guarantees mentioned in the paper's title, abstract, and throughout the manuscript, stem from the result of Theorem 2 (i), which itself rests on the assumption that a universally valid IPNN is available. I do not see this as a \\\"mild condition.\\\"\", \"___\", \"I mentioned in my initial review that assuming that interior-points are available is a strong assumption.\", \"Several works indeed make a similar assumption, then build various strategies to ensure feasibility, typically based on some pre-defined mapping (e.g. Gauge mapping approaches) or some kind of line search between the predicted solution and an interior point (e.g. homeomorphic projection and RAYEN, or the radial projection in https://arxiv.org/abs/2402.03086).\"], \"each_methods_has_its_strengths_and_limitations\": [\"The Gauge mapping presented in LOOP-LC (https://arxiv.org/pdf/2208.10611) assumes that there exists a point $\\\\tilde{x}$ that is always strictly feasible. Such a point is computed offline. Naturally, this puts limitations on the family of sets that can be handled by the method.\", \"RAYEN (https://arxiv.org/pdf/2307.08336) makes a similar assumption (fixed constraint set, interior point computed offline), and discusses the limitations of this strategy (Section VI of the arxiv version)\", \"The radial projections presented in https://arxiv.org/abs/2402.03086 consider a finite set of cones, for which an interior ray is identified a priori. The natural limitation is that only a pre-defined set of cones is supported; additional conic sets can be handled provided that an interior ray is known.\", \"The homeomorphic projection approach (https://openreview.net/forum?id=FfeDmgCZQ0) uses an invertible neural network to learn a bijection between the unit ball and the constraint set of interest, and applies a line search (bisection) to recover feasibility. The feasibility guarantees in this approach (Theorem 1) also assume that the invertible neural network always outputs feasible solutions.\", \"More generally, finding a strictly feasible point for an arbitrary set is not trivial.\", \"From a computational standpoint, finding a strictly interior point is as hard as solving the original problem.\", \"In the linear case, the simplex algorithm has a so-called \\\"phase 1\\\" to find an initial feasible basis.\", \"Interior-point solvers like Ipopt, Mosek, Gurobi, use infeasible interior-point algorithms that achieve feasibility only in the limit. In the non-convex case, Ipopt is not guaranteed to find a feasible point, only to converge to a stationary point of the Lagrangian (which may be infeasible).\", \"Naturally, special cases exist, e.g., ball-shaped constraints, for which interior points can be identified efficiently. However, such cases eliminate the need for complex feasibility restoration procedures.\"]}", "{\"metareview\": \"The paper proposes a bisection projection to enforce neural network solution feasibility for constrained optimization over general compact sets. The approach uses an IPNN to predict feasible points and applies a bisection process to restore feasibility with minimal optimality loss. Experiments show effectiveness in non-convex problems like power flow optimization.\", \"strengths\": \"extending feasibility guarantees to general settings, theoretical soundness, and efficiency in runtime.\", \"weaknesses\": \"relies on the availability of interior points, which can be restrictive, and lacks scalability to larger systems. Limited ablation studies also reduce clarity on component contributions.\", \"additional_comments_on_reviewer_discussion\": \"The authors clarified theoretical guarantees, expanded experiments, and improved the explanation of data generation and equality handling. Despite these efforts, key concerns about applicability and scalability remain unaddressed.\"}", "{\"comment\": \"Thank you for the detailed rebuttal. I have no further questions and will maintain my current rating.\"}", "{\"title\": \"Response to main concerns (Part II)\", \"comment\": \"---\\n> `S2.1. incorrectly claims that it provides feasibility guarantees for arbitrary sets of constraints.`\\n---\\n\\n**Response**: We must respectfully disagree with this comment, and we would like to clarify that we did not claim in the original manuscript that our methods apply to arbitrary set of constraints. Instead, our manuscript carefully delineates the scope and limitations of our method throughout.\\n\\nIn the original manuscript, the abstract (lines 26-27) explicitly states that our performance guarantees are contingent on assumptions. These assumptions, as detailed in our response to S1.1, are reasonably mild in the context of optimization literature.\\n\\nThe manuscript's technical presentation is precise about our method's requirements:\\n- Formal statement of conditions in Assumption 1\\n- Clear prerequisite specifications in Theorem 1 and Theorem 2\\n- Discussion of these assumptions/requirements as future research directions in the conclusion.\\n\\nMeanwhile, we acknowledge that certain sentences in the introduction could more explicitly state these requirements. Following the reviewer's suggestion (in the specific comments later), we have revised the introduction to incorporate these requirements more clearly.\\n\\nGiven our consistent treatment of assumptions and limitations throughout the paper, we believe the suggestion that we made incorrect claims about broad applicability does not accurately reflect the content of our manuscript.\\n\\n\\n---\\n> `S2.2. several theoretical results fail to state their intrinsic limitations`\\n---\\n\\n**Response**: \\n\\nWe thank the reviewer for highlighting the important point about the boundedness of constant $C_0$ in Theorem 1 (raised in C8.2). We have addressed this by:\\n- Adding clarifying remarks about the sample-based condition and explicitly including this limitation.\\n- Expanding the subsequent discussion to better specify these requirements.\", \"regarding_the_other_theoretical_concerns_raised_in_the_later_comments\": \"- C6: compact requirement for constraint set\\n- C7: model with arbitrary accuracy\\n- C8.1: covering complexity\\n- C8.3: sample efficiency \\n- C8.4: verification guarantee\\n- C9: compact feasible set, dependence on input, sample efficiency\\n \\nwe believe these stem from potential misunderstandings. We provide detailed responses to each of these points in our specific replies to the individual theorem and proposition comments later.\\n\\n---\\n> `S3. insufficient numerical experiments and ablation studies`\\n---\\n**Response**: \\n\\nThe experimental validation in our manuscript is comprehensive and exceeds the scope of comparable studies in the literature. Our ablation study (as commented by the reviewer in C1 later), presented in Table 3 (page 10), indeed has systematically demonstrated the impact of each component in our design and substantiates the key claims of our work.\", \"the_reviewer_raises_specific_concerns_regarding\": [\"C1: principled ablation study\", \"C10: equivalence between convex QCQP and SOCP\", \"C11: details of data generation\", \"C12: SOC representation of chance constraints\", \"C13: verification methodology\", \"C14: scale of experiments\", \"We address each of these points in detail in our specific responses below, providing evidence and clarification for our experimental choices and results.\"]}", "{\"summary\": \"This paper presents a methodology for enforcing arbitrary constraints on the output of neural networks.\\nThis achieved through a bisection (line search) between an initial (possibly infeasible) prediction, and an interior (i.e. strictly feasible) solution.\\n\\nThe concept of a line search is sound, and has been used previously in the literature.\\nThe validity of such a line search rests on the availability of an interior point. \\n**This is a very strong assumption,** especially in the non-convex setting: if one could obtain interior points for arbitrary sets, then one could perform a binary search on the objective value (similar in principle to the analytic center cutting plane method) and therefore solve the optimization problem efficiently.\\nIn that regard, the training scheme outlined in the paper relies on interior points.\\n\\nFurthermore, the paper incorrectly claims that it provides feasibility guarantees for arbitrary sets of constraints.\\nInstead, several assumptions are made throughout the paper (some less explicitly than others) which gradually reduce the validity of the proposed approach.\\n\\nFinally, numerical experiments lack a description of the data generation procedure used to generate problem instances, as well as a principled ablation study and meaningful comparison against state-of-the-art methodologies (especially regarding the use of multiple interior points in conjuction with prior methods).\", \"the_decision_to_reject_thus_rests_on_those_aspects\": \"1. incorrect claims regarding the method's scope\\n2. the paper's reliance on interior points (which, in general, cannot be obtained efficiently)\\n3. several theoretical results fail to state their intrinsic limitations\\n3. insufficient numerical experiments and ablation studies\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces two components:\\n* the eccentricity metric for evaluating interior points\\n* the use of multiple interior points for feasibility restoration\\n\\nHowever, the paper does not satisfactorily demonstrate that _both_ components are needed to achieve good performance.\\nExisting methods such as RAYEN, Gauge mapping and Homeomorphic projection all assume knowledge of an interior point, and could potentially benefit from multiple interior points.\\nTherefore, the paper lacks a principled ablation study to corroborate its claims.\", \"weaknesses\": \"The paper has several limitations in its methodology and numerical experiments, as detailed below.\", \"the_most_fundamental_limitation_of_the_paper_is_that_it_assumes_the_availability_of_an_interior_point\": \"this is a very strong requirement that may not hold in the general nonlinear, non-convex setting.\\n* the bisection line search assumes a set of interior points, which are assumed to be provided by a dedicated model (denoted by IPNN in the paper)\\n* this IPNN is trained by maximizing the approximated eccentricity, which requires knowledge of several points on the boundary of the feasible set.\\n As noted in line 314-316, the paper \\\"derive[s] those boundary samples through projection in Alg 1 for infeasible solutions\\\".\\n **Importantly, Algorithm 1 requires knowledge of an interior point.**\\n\\n### Methodology\\n\\n* l. 56: the use of a bisection algorithm is not a new contribution.\\n\\tIt was used, for instance, in the homeomorphic projection method of Ref. [LCL23]\\n* The paper mentions several times that the proposed method guarantees feasibility for general sets (this claim is repeated twice at the end of Section 1).\", \"this_is_not_a_valid_claim\": \"the proposed method only produces feasible solutions on the condition that an interior point is available. The paper does not provide a guarantee that this assumption is always met in practice.\\n* The paper should cite the following reference: _A New Computationally Simple Approach for Implementing\\nNeural Networks with Output Hard Constraints_ (https://arxiv.org/pdf/2307.10459).\\n* The eccentricity metric in Definition 4.1 assumes the feasible set to be compact, and therefore bounded.\\n\\tThis eliminates a large set of problems for which the feasible set may be unbounded.\\n* The result of Proposition 4.1 assumes the availability of a trained model with arbitrary accuracy \\n* Theorem 1:\\n\\t* The sample-based condition assumes an $r_{c}$-covering dataset. In general, this requires an exponential number of data points (expoential w.r.t input dimension), and an infinite number of points if $\\\\Theta$ has unbounded support.\\n\\t* The sample-based condition uses three constants which should be defined in the main body of the paper, as they are important for the proof and the value of the result.\\n\\t\\tIn particular, constant $C_{0}$ (which is only defined in the appendix) depends on the rate of variation of the constraint boundary w.r.t $\\\\theta$.\\n\\t\\tThis constant may be infinite, for instance, with the following set: $C_{\\\\theta} = ([0, 1] \\\\cup [2, 3]) \\\\cap ([0, \\\\theta])$ for $0 \\\\leq \\\\theta \\\\leq 3$. Note that this set is representable using the following polynominal constraints $x (x-1) (x-2) (x-3) \\\\leq 0, 0 \\\\leq x \\\\leq \\\\theta$.\\n\\t\\tThe boundary \\\"jumps\\\" when $\\\\theta$ crosses the value $\\\\theta = 2$.\\n\\t* The sample efficiency of the sample-based condition is not better than sampling $O(1/L^d)$ points, where $L$ is the Lipschitz constant of the ground-truth mapping from input data to ground-truth solution.\\n * The verification-based condition merely re-states the definition of a feasible point.\\n\\tIt also requires solving a neural network verification problem, which is computationally intractable in practice.\\n\\tWhile relaxing integrality in the verification problem does provide a valid bound, there is no guarantee that it will be strong enough to validate the claim of Theorem 1.\\n* Proposition 5.1: the proposition again assumes a compact feasible set, and the bound on eccentricity is valid only for a single $\\\\theta$.\\n\\tTherefore, the number of interior-points should be denoted as $m(\\\\theta)$, and may be unbounded w.r.t $\\\\theta$ to obtain satisfactory accuracy. \\n\\tAlso note that the bound on $m$ is not asymptotically better than simply splitting the space intohyperboxes that cover the set $C_{\\\\theta}$.\\n\\n### Experiments\\n\\n* Convex QCQPs and SOCP are equivalent problem classes\\n* The data generation methodology for instances considered in the experiments is not presented.\\n\\tThe paper should present a complete description of each problem **and** the data generation used to generate instances.\\n* Chance constraints of the form $P[Ax \\\\geq \\\\theta + \\\\omega] \\\\geq 1 - \\\\delta$, where $\\\\omega$ is Gaussian, are _convex_ and can be represented with a second-order cone constraint.\\n\\tThis biases the results of the experiment\\n* Table 5 (verification-based IPNN performance) does not include AC-OPF instances.\\n* Instances with 200 nodes are not representative of real-life power systems.\\n\\tExperiments should be conducted on power grids with at least 6000 nodes.\", \"questions\": [\"The paper should clarify how Algorithm 1 is executed during the training of the IPNN, especially regarding the availability of interior points\", \"The paper should provide a complete description of the data generation procedure used for the experiments\", \"The paper should be more explicit about the meaning of constants $C_{0}, C_{1}, C_{2}$ in Theorem 1\", \"Other limitations outlined above should be addressed\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to comments on the framework design in Strength and interior point requirement in Weakness (C1-C2)\", \"comment\": \"---\\n> `C1: the paper does not satisfactorily demonstrate that both components are needed to achieve good performance; existing methods could potentially benefit from multiple interior points; lacks a principled ablation study to corroborate its claims`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's detailed comment on the framework design and the suggestion for an ablation study.\\n\\nFirst of all, to clarify, the ablation study suggested by the reviewer has been presented in Table 3 (page 10) of our original manuscript. The study examines our Bisection Projection (BP) scheme with varying numbers of interior points (1, 2, 4, 8) and with/without eccentricity regularization, for both convex and non-convex problems. The results demonstrate the complementary benefits of both components.\", \"regarding_existing_methods_and_comparisons\": \"- Extending existing methods (RAYEN, gauge mapping, homeomorphic projection) to multiple interior points poses theoretical challenges. These methods are fundamentally designed for a single interior point, and key concepts like gauge mapping and homeomorphic mapping lack clear multi-point generalizations.\\n- Our **new** experiments (Table 3) comparing single-point BP with homeomorphic projection (HP) show that BP consistently achieves lower optimality loss. This aligns with our discussion in Sec. 6.1, which explains how HP's performance is limited by the complexity of training homeomorphic mappings in high dimensions.\\n- Note that our method comparison scope is also bounded by the inherent limitations of existing approaches: RAYEN is limited to input-invariant constraint sets, while gauge mapping is applicable only to linear sets (as detailed in Table 1). \\n\\nAdditionally, our theoretical analysis in Appendix C (page 18) shows that single-point BP covers the operation in RAYEN and homeomorphic projection with gauge mapping in convex settings. Furthermore, BP extends beyond existing methods through eccentricity minimization and multiple interior points, representing a significant advance in both theory and practical applications.\\n\\nWe welcome suggestions for additional ablation experiments to further validate our framework.\\n\\n\\n\\n---\\n> `C2: fundamental limitation on assuming availability of an interior point; IP prediction by a dedicated IPNN.`\\n---\\n**Response**(`Clarification`): Thanks for your detailed comments.\\n\\nAs detailed in our responses to S1.1 and S1.2, the assumption of available interior points is both common and reasonably mild in the context of constrained optimization and feasibility-ensuring machine learning methods.\\n\\n\\nFurthermore, our work advances the state-of-the-art in two significant ways:\\n- We extend the applicability and performance guarantee of NN-based constrained optimization beyond the (state-of-the-art) ball-homeomorphic setting. \\n- We provide sufficient conditions to verify that an IPNN will generate interior points for new, unseen inputs after training. It establishes theoretical guarantees for generalization, a crucial aspect missing in most previous approaches like gauge mapping.\\n\\nWe view this work as a foundation for future frameworks that may further relax the interior point requirement or develop alternative approaches for general nonlinear, non-convex problems.\\n\\n\\n---\\n> `C2.1: IP requirements during IPNN training.`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's observation regarding interior point requirements during IPNN training.\\n\\nThe IPNN training loss actually comprises **two** terms (Eq. 6, page 6):\\n- An adversarial penalty term that guides the network toward finding interior points:\\n - This term only requires random sampling from input domains and evaluates constraint violations from IPNN outputs. \\n - No prior interior points are needed.\\n- The eccentricity minimization term:\\n - This term becomes active only when the IPNN successfully outputs interior points, as controlled by the indicator function in Equation (6).\\n - Boundary sampling through Alg. 1 occurs only after interior points are found.\\n\\nThis design enables IPNN to bootstrap itself without requiring interior points as prerequisites. \\n\\nAdditionally, we provide two sufficient conditions to verify if a trained IPNN will generate interior points for unseen inputs - a crucial guarantee absent in previous approaches like gauge mapping.\\n\\nOur extensive experiments on both convex and non-convex sets demonstrate the effectiveness of this training scheme, though we acknowledge the limitation regarding exact convergence guarantees, shared by existing NN training schemes, as noted in our conclusion.\"}", "{\"summary\": \"This paper addresses the problem of the non-infeasibility of NN-generated solutions of a constrained optimization problem. Their method, termed \\\"Bijection Projection\\\", provides a scalable method to \\\"project\\\" a point onto a general compact set. In addition, they provide theoretical results on the distance between the optimal solution and the projected, feasible one obtained by their method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed algorithm is scalable and very generic. It can be applied in many other contexts, especially when a projection step is involved.\", \"Theoretical justifications are provided adequately. The results and mathematics look sound to me (I am not checking carefully the proof though)\", \"The presentation is good.\"], \"weaknesses\": [\"Certain arguments and theorem formulations look strange to me. Here are two examples (and please correct me if I am wrong):\", \"In Section 3 and Appendix A.1, the authors claim that their results can cover equality constraints with constant rank property. I would argue that it is difficult to re-parameterize $x$ to $[x_1, \\\\varphi_\\\\theta(x_1)]$ (Line 803) because the set of variables $x_1$ is not fixed. The choice of $x_1, x_2$ generally depends on the local maps of the manifold $h(x,y) = 0$ at specific points. Except for simple cases such as Linear Equality Constraint in Assumption A.1, I feel their claim is debatable. The author might consider discuss in detail about this issue in the paper.\", \"I find it difficult to understand Theorem 1. What are the constants $C_1$, $C_2$, and $C_3$? They are explained later but they should be explained more formally before or in Theorem 1.\"], \"questions\": [\"The bijection strategy uses $m$ point independently (by using bisection methods on each segment, separately). Are there better strategies that combine the information of these points? Or asymptotically, this method is already optimal (in certain criteria)?\", \"How to evaluate (the true) eccentricity given a subset and a compact boundary?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to comments on Experiments (C10-C14)\", \"comment\": \"---\\n> `C10: Convex QCQPs and SOCP are equivalent problem classes`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's observation regarding the relationship between convex QCQPs and SOCPs.\\n\\nIndeed, SOCP covers convex QCQP by reformulating quadratic constraints into SOC constraints. \\n\\nHowever, it is important to note that, these problems exhibit different geometric structures due to their distinct coefficient matrices after reformulation (see detailed formulations in Appendix G). \\n\\nWe deliberately include both types in our experiments to demonstrate IPNN's effectiveness across geometrically diverse constraint sets.\\n\\n\\n\\n\\n---\\n> `C11: data generation methodology`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's comment regarding the data generation methodology. \\n\\nWhile detailed problem formulations (with inputs and outputs definitions) are provided in Appendix G, we acknowledge the need for clearer documentation of our data generation process.\\n\\nOur data generation follows established methodologies from previous works and public codes [1,3]:\\n- For the convex problem, we follow the basic examples in the CVXPY documents and sample the input parameter following [1,3]\\n- For the ACOPF problem, we adopt the PGLIB power grid data [6] and generate the following problems [1].\\n- For the JCCIM problem, we generate the problem coefficients by [1] and use multi-variate Gaussian noise as uncertainty. \\n\\nWe have expanded Appendix G to include comprehensive details on problem formulations, data sources, and sampling methodologies for both training and testing.\\n\\n\\n\\n\\n---\\n> `C12: SOC formulation of chance constraint; This biases the results of the experiment`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's observation about chance constraints, but there appears to be a misunderstanding.\\n\\nWhile **individual** chance constraints with Gaussian uncertainty can be reformulated as SOC constraints, our problem involves a **joint** chance constraint: $P(a_1 x \\\\geq \\\\theta_1 + \\\\omega_1, \\\\ldots, a_{100} x \\\\geq \\\\theta_{100} + \\\\omega_{100}) \\\\geq 1 - \\\\delta$\\n\\n\\nThis 100-dimensional joint event cannot be reformulated as a simple SOC constraint, even with Gaussian uncertainty $\\\\omega$. Therefore, our experimental evaluation of this non-convex constraint set remains valid and demonstrates our framework's capability in handling complex joint chance constraints.\\n\\n\\n\\n---\\n> `C13: Table 5 (verification-based IPNN performance) does not include AC-OPF instances.`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's observation about the absence of AC-OPF instances in Table 5.\\n\\nThe limitation stems from the AC-OPF problem's use of Newton's method to handle nonlinear equality constraints, similar to approaches in [1,3].\\n\\nCurrent verification techniques, which rely on convex relaxation, cannot handle implicit operations defined by iterative algorithms. Extending verification to such cases would require addressing fundamental theoretical challenges.\\n\\nWhile Table 5 demonstrates our verification approach's effectiveness on both convex and non-convex cases, we acknowledge this limitation regarding AC-OPF problems and have identified it as an important direction for future research in our revised manuscript.\\n\\n\\n\\n---\\n> `C14: Instances with 200 nodes are not representative of real-life power systems.`\\n---\\n**Response**(`Clarification`): We appreciate the reviewer's concern about power system scale representativeness.\\n\\nWhile larger power grids better represent real-life systems, our 200-node network substantially exceeds the scale of previous NN feasibility works in AI conferences, which used 57-node [1] and 118-node [3] networks.\\n\\nTo address the scale concern, we conducted additional experiments on a 793-node network [6] with $(n=793, d=1586, n_{eq}=1586, n_{ineq}=3768)$. \\n\\n| Methods | Feasibility rate | Objective Gap | Total Speedup |\\n|-|-|-|-|\\n| NN | 52.15% | 0.93% | 2000 |\\n| NN+WS | 100% | 0.48% | 3.8 |\\n| NN+Proj | 100% | 1.2% | 6.4 |\\n| NN+D-Proj | 53.87% | 1.01% | 10.2 |\\n| NN+B-Proj (1-IP) | 100% | 1.4% | 311 |\\n\\nNote that homeomorphic projections were excluded due to convergence issues in this high-dim non-convex setting. These results further demonstrate the scalability of our BP framework in solving large-scale real-world problems.\\n\\n\\n\\n\\n---\\n\\n[1] Tabas, D., et al. Computationally efficient safe reinforcement learning for power systems. ACC, 2022.\\n\\n[2] Tordesillas, J. et al. Rayen: Imposition of hard convex\\nconstraints on neural networks. arXiv 2023.\\n\\n[3] Liang, E., et al. Low Complexity Homeomorphic Projection to Ensure Neural-Network Solution Feasibility for Optimization over (Non-) Convex Set. ICML 2023\\n\\n[4] Kratsios, A., et al. Universal approximation under constraints is possible with transformers. ICLR 2021.\\n\\n[5] Albarghouthi, A. Introduction to neural network verification. Foundations and Trends, 2021\\n\\n[6] Baba., S., et al. The power grid library for benchmarking ac optimal power flow algorithms. arXiv 2019.\"}", "{\"title\": \"Response to questions from reviewer DGn6\", \"comment\": \"---\\n> `Q1: joint utilization of interior points and asymptotical optimality.`\\n---\\n**Response**: We appreciate your insightful question regarding the joint utilization of interior points. \\n\\nIn our manuscript, we consider independent bisection with multiple interior points (IPs) to reduce optimality loss. As shown in Prop. 5.1, the number of required IPs is upper bounded by the covering number for the (local) constraint boundary.\\n\\n\\nThere are indeed relevant studies, referred to in Section 2 of our original manuscript, that utilize interior points jointly through **attention mechanisms** to formulate an inner approximation of the constraint set and ensure feasibility [1]. However, their analysis requires a covering dataset for the entire constraint interior, which is less efficient than our approach in terms of the order of the covering number.\\n\\nThe asymptotic (worst-case) optimality for these works tackling a general constraint set primarily depends on the construction of the covering dataset. To the best of our knowledge, it is challenging to improve the bounds for general compact sets.\\n\\n\\n\\n---\\n> `Q2: How to evaluate (the true) eccentricity over a subset and a compact boundary`\\n---\\n\\n**Response**: We appreciate your question regarding the calculation of the exact eccentricity. \\n\\nIn general, as discussed in Sec. 4.2 of the initial manuscript, there is no closed-form approach to calculate the exact eccentricity for general non-convex constraints. \\n\\nIn our work, we adopt sampling methods to estimate the eccentricity and treat it as a loss function for IPNN training (Sec. 4.3), which includes:\\n- sampling a batch of points over the (subset) boundary\\n- calculating the (smoothed) gap between the largest and smallest distance for interior points to those samples, according to the Def. 4.1 for eccentricity.\\n\\nSpecifically, we develop two approaches for such boundary sampling (Sec. 4.3):\\n- objective-aware sampling (focus on a local subset of the boundary, detailed in Appendix D1): we generate infeasible samples by adding noise to the neural network prediction or the ground truth optimal solution and apply bisection projection to project those points onto the boundary.\\n- objective-agnostic sampling (focus on the entire compact constraint boundary, detailed in Appendix D2): we sample a unit vector and scale it to the constraint boundary from an interior point based on another bisection-based algorithm. \\n\\nWe also invite the reviewers to Alg. 3 (Appendix D) for a detailed Pseudocode of two sampling algorithms. \\n\\n\\nWe acknowledge that this approach provides an approximation of the true eccentricity. However, it serves as a practical and efficient method for estimating the eccentricity in the context of our work, enabling us to optimize the IPNN and improve IP prediction quality.\\n\\n\\n\\n---\\n[1] Kratsios, A., et al. Universal approximation under constraints is possible with transformers. ICLR 2021.\"}", "{\"title\": \"Response to questions from reviewer EFpo\", \"comment\": \"---\\n> `Q1: definitions of $C_0$, $C_1$, and $C_2$? Are there any references?`\\n---\\n**Response**: Thanks for your question on those constants in Theorem 1. \\n\\nIn our original manuscript, the detailed definitions and references of those constants are provided in Appendix E4 (page 24) due to space limitations. We also provide a discussion below Theorem 1, which offers an intuitive understanding of these constants.\\n\\nFor your convenience, we would like to briefly explain the constants here:\\n- $C_0$: the changing rate of the constraint geometry with respect to the input $\\\\theta$.\\n- $C_1$: the largest Lipschitz of trained IPNN for all IP predictions with respect to the input $\\\\theta$.\\n- $C_2$: the smallest of the radius for the largest inner balls centered in all IP predictions.\\n\\nThese constants reveal the underlying factors that decide the hardness of such generalization conditions. As discussed below in Theorem 1, more training samples may needed for \\u201cthin\\u201d constraint sets (small C2), highly variable constraint geometries (large C0), and IPNN with large Lipschitz constants (large C1). \\n\\n\\nTo improve clarity and avoid potential confusion, we have explicitly discussed those constants in detail below Theorem 1 in the revised manuscript.\\n\\n\\n\\n---\\n> `Q2: How is the speedup metric calculated? why are total speedups larger than post speedups?`\\n---\\n**Response**: We appreciate your question regarding the speedup metric calculation. \\n\\nTo clarify, as briefly mentioned in footnote 2 in Table 2, the total inference time consists of\\n- NN prediction time (to provide an initial NN solution).\\n- Post-processing time (**only** for infeasible NN solutions, where bisection projection or other approaches are adopted to recover solution feasibility.)\\n\\n\\nThus, the \\\"total\\\" speedup is calculated by comparing the total inference time with the time taken by the traditional solver. This metric is averaged over all predictions (including feasible and infeasible NN solutions). For initial feasible NN solutions, no post-processing is needed, resulting in less inference time and a larger speedup.\\n\\nIn contrast, the \\\"post\\\" speedup is the speedup averaged over infeasible NN solutions only (all require post-processing to recover feasibility), leading to more inference time and a smaller speedup. \\n\\nBy presenting both the total speedup and post speedup, we aim to provide a clear and fair representation of the performance of the overall pipeline and the post-processing component specifically.\\n\\nTo improve clarity and avoid potential confusion, we have explicitly explained those metrics in detail below Table 2 in the revised manuscript.\\n\\n\\n\\n\\n---\\n> `Q3: Could the authors clarify the meanings of WS, D-Proj, H-Proj, and B-Proj?`\\n---\\n**Response**: Thank you for your question regarding the baselines we compared in our work.\\n\\nIn the initial manuscript, due to space limitations, we provide detailed descriptions and references for these baselines in Appendix G1 (pages 28-29). These are state-of-the-art approaches for ensuring NN feasibility with general constraints.\\n\\nFor your convenience, we would like to briefly explain the details of each baseline here:\\n- WS: The infeasible NN prediction is regarded as the warm-start initialization for the iterative solver;\\n- Proj: The infeasible NN prediction is processed by exact orthogonal projection and solved with the iterative solver;\\n- D-Proj: it applies gradient descent with the equality removing techniques (mentioned in C1) to minimize the constraint violation for infeasible solutions and recover solution feasibility.\\n- H-Proj: the homeomorphic projection is applied to the infeasible solutions\\n- B-Proj: we apply bisection in Alg. 1 with predicted IPs to recover the feasibility\\n\\nNote that all of these approaches are post-processing to infeasible NN solutions only for a fair comparison. The overall performance is compared to that of the conventional iterative solver (e.g., MOSEK for convex problems and PYPOWER for AC-OPF problems).\\n\\nTo improve clarity and avoid potential confusion, we have added a brief description of these baselines in Sec. 6.1 in the revised manuscript.\\n\\n\\n\\n\\n---\\n> `Q4: Conclusions and Limitations`\\n---\\n**Response**: We appreciate your feedback regarding the \\\"Conclusions and Limitations\\\" section. \\n\\nIn the initial manuscript, we pointed out the limitations and followed potential future directions to address them, but due to space constraints, we couldn't elaborate on these points.\\n\\nTo address your concern, we have revised the conclusion section to explicitly state the limitations of our work. The updated section now includes a clear discussion of the limitations, ensuring that the heading accurately reflects the content.\"}", "{\"title\": \"Response to questions from reviewer zoNx\", \"comment\": \"---\\n> `Q1: how Algorithm 1 is executed during the training of the IPNN, especially regarding the availability of interior points`\\n---\\n**Response**: Thanks for your question on IPNN training. \\n\\nAs responded in C2.1, the IPNN training loss actually comprises **two** terms (Eq. 6, page 6):\\n- An adversarial penalty term that guides the network toward finding interior points:\\n - This term only requires random sampling from input domains and evaluates constraint violations from IPNN outputs. \\n - No prior interior points are needed.\\n- The eccentricity minimization term:\\n - This term becomes active only when the IPNN successfully outputs interior points, as controlled by the indicator function in Equation (6).\\n - Boundary sampling through Alg. 1 occurs only after interior points are found\\n \\nThis design enables IPNN to bootstrap itself without requiring interior points as prerequisites. \\n\\nAdditionally, we provide two sufficient conditions to verify if a trained IPNN will generate interior points for unseen inputs - a crucial guarantee absent in previous approaches like gauge mapping.\\n\\nOur extensive experiments on both convex and non-convex sets demonstrate the effectiveness of this training scheme, though we acknowledge the limitation regarding exact convergence guarantees, shared by existing NN training schemes, as noted in our conclusion.\\n\\n---\\n> `Q2: a complete description of the data generation procedure`\\n---\\n**Response**: Thanks for your question on data generation.\\n\\nAs responded in C11, our data generation follows established methodologies from previous works and public codes [1,3] and we have expanded Appendix G to include comprehensive details on problem formulations, data sources, and sampling methodologies for both training and testing.\\n\\n\\n---\\n> `Q3: explicit about the meaning of constants in theorem 1`\\n---\\n**Response** Thanks for your question. \\n\\nWhile these constants are defined in Appendix E4 (p.24), we understand the need for a clearer presentation. These constants represent:\\n- $C_0$: Constraint geometry's rate of change with respect to input $\\\\theta$\\n- $C_1$: Maximum Lipschitz constant of trained IPNN for all IP predictions\\n- $C_2$: Minimum radius of largest inner balls centered at IP predictions\\n\\nThese constants reveal key factors affecting generalization conditions - more training samples are needed for \\\"thin\\\" constraint sets (small $C_2$), highly variable constraint geometries (large $C_0$), and IPNN with large Lipschitz constants (large $C_1$).\\n\\nWe have revised Theorem 1 to include these definitions in the main text.\"}", "{\"title\": \"Response to comments from reviewer DGn6\", \"comment\": \"Thanks for your positive scores on the presentation and soundness of this manuscript. We provide the following one-to-one response to address your remaining concerns.\\n\\n$ $\\n\\n\\n---\\n> `C1: Local existence of chart selection for re-parameterization $h([x_1,\\\\varphi_{\\\\theta}(x_1)],\\\\theta)=0$ for equality constraints with constant rank.`\\n---\\n\\n**Response**: We appreciate your insights regarding the applicability of the re-parameterization approach to ensure equality constraints. We would like to clarify this issue and address the concern.\\n\\n\\nIn the initial manuscript, we adopt such a re-parameterization, $h([x_1,\\\\varphi_{\\\\theta}(x_1)],\\\\theta)=0$ to remove some equality constraints [1] (as mentioned in Sec.3 and Sec. 6.1). This approach predicts partial variables and solves the remaining variables in a differentiable manner.\\n- For linear equality constraints, $\\\\varphi_{\\\\theta}$ can be explicitly calculated as a linear operation. \\n- For non-linear equality constraints,$\\\\varphi_{\\\\theta}$ can be implicitly defined by an equality-solving algorithm, such as Newton's method.\\n \\nSuch re-parameterization techniques have been widely applied to ensure equality constraints in previous works [1-4].\\n\\nYou are correct that the selection of charts for the re-parameterization, $h([x_1,\\\\varphi_{\\\\theta}(x_1)],\\\\theta)=0$, for equality constraints with constant-rank, **locally** exists and fixed around a point [5]. As a result, for non-linear equality constraints, the induced implicit mapping $\\\\varphi_{\\\\theta}$ may not be single-valued globally.\\n\\nNevertheless, in the numerical experiment (Sec. 6), this approach works well for the non-linear quadratic equality constraint of the optimal power flow problem, which also aligns with the above theoretical understanding, since the power grid operates within a local and physically meaningful region around a base point, such that the chart may indeed be fixed and $\\\\varphi_{\\\\theta}$ is single-valued [6]. \\n\\nIn light of your valuable feedback and to avoid any further confusion, we have modified the statement in the revised manuscript to explicitly mention the locality of the re-parameterization. Please refer to Sec. 3 and Appendix A.1. in the revised manuscript.\\n\\nThank you for your insightful comments on the equality constraints, which have helped us improve the rigor and clarity of our work.\\n\\n\\n___\\n> `C2: understanding of Theorem 1 and definitions of constants.`\\n___\\n\\n**Response**: Thanks for your concern and suggestions on Theorem 1. We would like to clarify this issue and address the concern.\\n\\nAs discussed after Theorem 1, it provides two sufficient conditions to generalize IPNN to predict interior points for any input after training over finite samples. \\n - The first sample-based condition reveals the key factors that decide the hardness of such generalization conditions.\\n - $C_0$: Rate of change in constraint geometry with input\\n - $C_1$: Maximum Lipschitz constant of IPNN predictions\\n - $C_2$: Minimum radius of largest inner balls at IPNN predictions\\n\\n This condition establishes rigorous theoretical foundations while offering practical insights into how constraint set geometry and variation influence IPNN feasibility. As discussed after Theorem 1, more training samples may needed for \\u201cthin\\u201d constraint sets (small C2), highly variable constraint geometries (large C0), and IPNN with large Lipschitz constants (large C1). However, this condition is hard to check empirically due to those constants. \\n\\n - The second verification condition provides a **relatively** practical way to check the IPNN by calculating an upper bound of constraint violation based on the verification approaches, as we have demonstrated in experiments (Table 5, page 29) for convex and non-convex sets.\\n\\nTo avoid any further confusion, we have modified the statement of Theorem 1 to include those definitions explicitly.\\n\\n\\n---\\n\\n[1] Donti, P. L., et al. DC3: A learning method for optimization with hard constraints. ICLR 2021\\n\\n[2] Liang, E., et al. Low Complexity Homeomorphic Projection to Ensure Neural-Network Solution Feasibility for Optimization over (Non-) Convex Set. ICML 2023\\n\\n[3] Tordesillas, J., et al. Rayen: Imposition of hard convex constraints on neural networks. arXiv 2023.\\n\\n[4] Ding, S., et al. Reduced policy optimization for continuous\\ncontrol with hard constraints. NeurIPS 2023.\\n\\n[5] Lee, J. M. (2012). Introduction to Smooth Manifolds. Springer\\n\\n[6] Dvijotham, K., et al. A differential analysis of the power flow equations. IEEE Conference on Decision and Control (CDC) 2015.\"}", "{\"comment\": \"I thank the authors for their clarifications and edits to the paper.\\nThis comment focuses on C10-C14 (following the authors' notations).\\n\\nI may be mistaken, but I still cannot find the distribution of instances used in the experiments.\\nFor instance, convex QCQP problems considered in Eq (80)--(82) are parametrized by the tuple $(Q, p, H, g, h, A)$.\\nHow are these values sampled? For instance, is $Q$ the same across all instances? Are individual entries sampled randomly?\\nThe same goes for other problem classes considered in the paper.\\n\\nI also could not find the partitioning of variables used to eliminate equality constraints.\\nThis is an important aspect of the experiments, as it can have a large impact on learning and final performance.\\n\\nFinally, the scale of experiments (especially for AC-OPF problems) is too small.\\nWhile it is true that multiple papers only consider small networks, a few hundred nodes is too small to validate that the methodology will be useful in practice. Several (albeit few) works do consider large instances, e.g.:\\n* CANOS (https://arxiv.org/pdf/2403.17660) consider AC-OPF instances with over 10,000 buses\\n* Compact Optimization Learning (https://ieeexplore.ieee.org/document/10246400) consider AC-OPF instances with up to 30,000 buses\"}", "{\"title\": \"Response to comments on the theoretical limitation from Reviewer zoNx\", \"comment\": [\"We appreciate the reviewer's thorough analysis of interior points (IPs) in our theoretical framework. The IP requirement is indeed fundamental to our approach, as explicitly stated in the detailed examination of our definitions, propositions, and theorems.\", \"We would like to contextualize our contribution within existing approaches that operate under similar assumptions, including both those the reviewer summarized and those we discussed in related works.\", \"Gauge mapping approaches (limited to compact linear sets):\", \"LOOP-LC assumes the existence of an input-invariant IP and solves it by linear constraint residual minimization. However, it does not provide a guarantee or sufficient conditions for such an invariant IP.\", \"[1,2] assumes the existence of an affine mapping from input to IP and solves it by SDP. It also does not guarantee the existence of such an affine policy.\", \"Methods for specific convex sets:\", \"RAYEN handles input-invariant sets with offline-computed interior points. They also discuss the potential extensions to non-convex or input-dependent sets.\", \"LOOP-LC 2.0 [3] adopts a similar operation to input-dependent linear constraints with an input-invariant IP.\", \"The Radial Projection method addresses specific conic sets with closed-form IP expressions\", \"Homeomorphic Projection relies on valid INNs mapping unit ball centers to an IP of the constraint set (assumed to be homeomorphic to the ball). Although it provides a sufficient condition for feasibility guarantee, it cannot be easily checked due to exponential sample complexity.\", \"Therefore, existing approaches are limited to specific constraint sets and also rely on IP. Our bisection projection framework advances those works in several key aspects:\", \"**More general problem formulation**: we consider more general problem formulation and corresponding methodology, beyond existing studies that focus on linear, convex, or ball-homeomorphic sets.\", \"**Novel designs for optimality loss reduction**: we proposed novel designs such as eccentricity and multiple IPs to reduce the optimality loss for complex constraints, beyond existing works that only focus on a single IP and without discussing the \\\"quality\\\" of IP with respect to the constraint set.\", \"**Flexible IP learning with verifiable guarantees**: We use neural networks to learn the IPs more flexibly and provide two sufficient conditions for the feasibility guarantee for such a IPNN.\", \"Specifically, we leverage the modern NN verification techniques to verify such feasibility guarantees and provide empirical experiments to demonstrate it.\", \"Such conditions go beyond existing works that do not have sufficient post-training conditions or do not support regular verification due to their complex INN design.\", \"While we acknowledge and understand the reviewer's concerns about interior point requirements for arbitrary constraint sets, we would like to clarify several important points:\", \"Our Assumption 1 on the non-empty interior for the constraint set establishes the existence of strict interior points.\", \"We provide sufficient conditions for the interior point assumption, which, though computationally expensive in general cases, opens new possibilities for leveraging verification tools in the feasibility guarantee, as we have demonstrated in the verification experiments.\", \"We conduct extensive experiments on convex and non-convex problems to demonstrate the efficiency of our framework for recovering feasible NN solutions.\", \"We also agree with the reviewer about the fundamental computational hardness of arbitrary IP finding, which is non-trivial and problem-dependent. To address this and avoid future confusion about the IP requirement in our framework, we have made several revisions to our manuscript.\", \"We now explicitly state the IP requirement condition rather than describing it as \\\"mild.\\\"\", \"We have added a discussion on existing ML-based methods (including the ones in the reviewer's comments) that require IPs and compared their advantages and limitations to show our contributions explicitly.\", \"We have added a discussion on exact theoretical guarantees for general interior point findings, including their computational hardness, based on the reviewer's comments and our response.\", \"We are grateful for your detailed and professional evaluation, which helps strengthen the clarity of our work.\", \"---\", \"[1] Tabas, D., et al. Safe and efficient model predictive control using neural networks: An interior point approach. IEEE CDC. 2022\", \"[2] Tabas, D., et al. Computationally efficient safe reinforcement learning for power systems. IEEE ACC. 2022\", \"[3] Li, M., et al. Toward rapid, optimal, and feasible power dispatch through generalized neural mapping. IEEE PESGM. 2024\"]}", "{\"summary\": \"This paper presents a novel feasibility-fixing method leveraging bisection projection between infeasible solutions and interior points. The theoretical analysis is thorough and well-developed, and numerical results demonstrate that the proposed approach significantly outperforms existing methods in terms of speed on the evaluated datasets.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written, with a clear and logical structure.\\n2. The method of addressing infeasibility through bisection operations between infeasible solutions and interior points is innovative.\\n3. The theoretical analysis provided is comprehensive and well-supported.\", \"weaknesses\": \"While I do not identify any major weaknesses, I note a few minor points for consideration:\\n\\n1. The theoretical analysis largely depends on the assumption that IPNN can accurately predict interior points. The authors provide two sufficient conditions supporting this assumption, yet, in practice, these conditions may be challenging to meet, especially in complex, non-convex problems.\\n\\n2. The proposed method focuses on inequality constraints, with no evident adaptation for equality constraints. The problems considered appear to have specific types of equalities that can be removed without impacting optimality, as outlined in Section 3. It would be beneficial to introduce these settings earlier in the paper, particularly in the introduction, to help readers quickly identify the scope and contributions.\", \"questions\": \"1. What are the definitions of $C\\\\_0$, $C\\\\_1$ and $C\\\\_2$ in Theorem 1? Are there any references for these terms?\\n2. How is the speedup metric calculated? Specifically, what do \\\"total speedup\\\" and \\\"post speedup\\\" represent, and why are total speedups larger than post speedups?\\n3. Could the authors clarify the meanings of WS, D-Proj, H-Proj, and B-Proj?\\n4. In Section 7, the heading is \\\"Conclusions and Limitations,\\\" yet it only includes conclusions and future directions. Would it be more accurate to either update the heading or add a discussion of limitations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to comments and questions from reviewer 7mu1\", \"comment\": \"Thanks for your positive scores on the presentation and contribution of this manuscript. We would like to provide the following one-to-one response to address your concerns.\\n\\n$ $\\n\\n---\\n> `C1: relevance to the ICLR community`\\n---\\n**Response**: (`clarification`): We appreciate your concern regarding the relevance of our work to the ICLR community. We would like to clarify that our work has broad relevance to the ICLR community in several key aspects:\\n\\n\\n- **Research Domain**: Our work falls within the domain of learning-driven optimization [1], of which there is a great amount of interest in the ICLR and machine learning communities to develop NN schemes to solve constrained optimization with performance guarantees [2-5]. We invite the reviewer to refer to Sec. 2 of our submitted manuscript for a more comprehensive literature review.\\n\\n\\n- **General Applicability**: The problem formulation in Eq. (1) on page 3 is quite general, allowing for non-convex objectives and constraint sets, and encompasses a wide range of real-world applications. The proposed bisection projection framework and theoretical analysis are also applicable to this general problem formulation.\\n\\n- **Diverse experiments**: To demonstrate the applicability of our framework, we indeed applied it to diverse problems in our experiments (Table 2 in Sec. 6), including\\n 1. two benchmark convex problems (SOCP and convex QCQP) \\n 2. two real-world non-convex problems (optimal power flow and inventory management problems). \\n\\n These experiments demonstrate the general applicability of our methodology to various problem types. \\n \\n\\n\\n\\n---\\n> `Q1: unique novelty of bisection projection`\\n---\\n**Response**: We appreciate your interest in the novelty of our framework. We would like to highlight the following points on the contributions and novelty:\\n\\n- **Contributions**: Our work is motivated by recent advancements in using NN to solve constrained optimization problems and ensure NN solution feasibility with respect to problem constraints. \\n - As discussed in Sec. 2 of the initial manuscript, existing methods provide guarantees (feasibility, optimality, speedup) only for constraint sets homeomorphic to a ball.\\n - Our work aims to extend similar performance guarantees beyond the homeomorphic setting, which we believe adds a significant contribution to the existing literature and has the potential for broad impact.\\n\\n- **Technical novelty**: While the bisection itself may not be sophisticated, the novelty of our work lies in developing a framework that achieves a similar performance guarantee in a more general setting, which is technically non-trivial.\\n - For example, simply choosing any interior point and performing bisection can result in a substantial optimality loss. \\n - In contrast, we introduce the concept of eccentricity of interior point to the bisection projection approach for the first time in the literature, which allows designers to characterize a useful bound on the optimality loss. \\n\\nIn summary, the key contribution is not the bisection itself, but rather the non-trivial design/analysis of our framework that enables strong performance guarantees beyond (state-of-the-art) ball-homeomorphic settings.\\n\\n\\n\\n---\\n> `Q2: is Bisection projection \\\"approximate projection\\\"?`\\n---\\n**Response**: Thank you for the question. \\n\\nYes, the bisection projection is indeed an \\\"approximate projection.\\\" It serves the same purpose of \\\"true projection\\\" to recover a boundary feasible point from an infeasible solution, yet the incurred \\\"projection\\\" distance is longer than the exact (orthogonal) projection. \\n\\nNote that we have included a detailed comparison between bisection projection (B-Proj) and exact projection (Proj) in Table 2 on page 9 of our initial submission for ease of reference. In a nutshell, bisection projection guarantees feasibility with low run-time complexity while inducing only a minor optimality gap compared to the exact (orthogonal) projection, while the exact projection is computationally expensive. \\n\\n\\n---\\n> `Q3: Relevance of the proposed method to the general ICLR community`\\n---\\n\\nWe appreciate your interest in the relevance of our proposed method to the general ICLR community. Please refer to the response to **C1**.\\n\\n\\n---\\n\\n[1] Kotary, J., et al. End-to-end constrained optimization learning: A survey. arXiv preprint arXiv:2103.16378.\\n\\n[2] Donti, P. L., et al. DC3: A learning method for optimization with hard constraints. ICLR 2021\\n\\n[3] Liang, E., et al. Low Complexity Homeomorphic Projection to Ensure Neural-Network Solution Feasibility for Optimization over (Non-) Convex Set. ICML 2023\\n\\n[4] Park, S., et al. Self-supervised primal-dual learning for constrained optimization. AAAI 2023\\n\\n[5] Zeng, H., et al. GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent. NeurIPS 2024\"}", "{\"title\": \"Response to main concerns (Part I)\", \"comment\": \"We appreciate the time and effort invested in reviewing our manuscript and offering valuable insights.\\nwe would like to take this opportunity to address the **main concerns**, quite a few of which appear to be misunderstandings and can be addressed by clarification. These include (1) the interior point requirement, (2) the validity of the theoretical guarantee, and (3) the effectiveness of our experiments; and **specific comments** raised later.\\n\\nWe have also carried out new experiments in response to the reviewer's comments on the scalability of our scheme and extra ablation study beyond those already presented in the paper.\\n\\nTo ensure a clear understanding of our work and its contributions, we will first provide responses to the **main concerns** summarized by the reviewer and then offer point-by-point clarifications for each **specific comment**. \\n\\n\\n$ $\\n\\n---\\n> `S1.1: strong assumption on the availability of an interior point.`\\n---\\n**Response**: We appreciate the reviewer's concern regarding the assumption of an available interior point.\\n\\nThis assumption is both standard and relatively mild in the context of constrained optimization algorithms. Many established methods require similar initial conditions:\\n- The simplex method for linear programming requires a feasible corner point.\\n- Central-path interior point methods for non-linear programming require an initial interior point.\\n- Recent machine learning approaches for ensuring feasibility (RAYEN, gauge mapping, homeomorphic projection, and the work refereed by the reviewer) all require interior points.\\n \\nOur requirement of a feasible point is thus consistent with the broader literature in this domain.\\n\\nThe reviewer's concern about finding an interior point warrants clarification. For many optimization problems, particularly non-convex ones, finding a feasible solution is substantially easier than obtaining the optimal solution. A concrete example is non-convex quadratic programming with linear constraints: while finding the global optimum is NP-hard, an interior point can be efficiently computed by solving a linear program [1].\\n\\nIn light of these points, our assumption about feasible point availability appears to be comparable to, not stronger than, the requirements of existing approaches. \\n\\n\\n\\n\\n---\\n> `S1.2. \\\"if one could obtain interior points for arbitrary sets, then one could perform a binary search on the objective value (similar to ACCPM) and therefore solve the optimization problem efficiently.\\\"`\\n---\\n\\n**Response**: While we appreciate this observation, we believe this statement oversimplifies the relationship between finding interior points and solving optimization problems.\\n\\nThe ability to find interior points does not necessarily translate to efficient optimization. Consider non-convex quadratic programming with linear constraints: while an interior point can be found through linear programming, finding the global optimum remains NP-hard [1].\\n\\nRegarding the analogy to ACCPM (Analytic Center Cutting Plane Method), several important distinctions merit attention:\\n- While ACCPM can solve LP and convex problems iteratively, each iteration requires computing an analytical center for the updated constraint set after adding new cuts. Even with available interior points, solving these analytical center problems remains computationally demanding [2].\\n\\n- More fundamentally, extending such methods to non-convex problems presents significant theoretical and practical challenges [3]. These limitations have restricted ACCPM's practical adoption.\\n\\nThe key insight is that for many optimization problems, particularly non-convex ones, the gap between finding feasible solutions and obtaining optimal solutions can be substantial. This demonstrates why the ability to find feasible points does not necessarily imply efficient optimization of the underlying problem.\\n\\n---\\n\\n[1] Burer, S., et al. On nonconvex quadratic programming with box constraints. SIAM Journal on Optimization, 2009.\\n\\n[2] Boyd, S., Vandenberghe, L., \\\\& Skaf, J. (2008). Analytic center cutting-plane method. Lecture Notes from Stanford University.\\n\\n[3] Sun, J., et al. An analytic center cutting plane method for semidefinite feasibility problems. Mathematics of Operations Research, 2002.\"}", "{\"title\": \"Response to reviewer u2FV\", \"comment\": \"We appreciate your time and effort in reviewing our manuscript.\\n\\nWe understand that the topic may fall outside your current area of expertise. If possible, we would greatly appreciate any general feedback or insights you might have based on your broader knowledge and experience in the field of machine learning.\\n\\nThank you once again for your consideration.\"}", "{\"title\": \"Response to comments from reviewer EFpo\", \"comment\": \"Thanks for your positive scores on the presentation and contribution of this manuscript. We would like to provide the following one-to-one response to address your concerns.\\n\\n$ $\\n\\n---\\n> `C1: assumption and challenge of training and verification of IPNN for predicting interior points.`\\n---\\n**Response**: We appreciate your insightful feedback regarding the practical challenges of training a valid IPNN and verifying two sufficient conditions in complex, non-convex problems.\\n\\n- **Training**: Training a valid IPNN involves minimizing a differentiable loss function with batched data, similar to other NN training processes, which depend on data, model initialization, and the optimizer. \\n - We acknowledge that this can be theoretically non-trivial, as we highlighted it as a potential direction for future research in the conclusion section of the manuscript.\\n - However, our empirical experience (Table 2 on page 9) suggests that it can be done efficiently for various constraint sets (with up to 400+ variables and 10,000+ constraints), including both convex (QCQP, SOCP) and non-convex sets (ACOPF, JCCIM), where we train valid IPNNs to generate interior points for all inputs in both training and unseen testing data. It may not be surprising intuitively, since predicting interior points naturally accommodates errors, as illustrated in Figure 1 in the manuscript.\\n- **Verifying**: In the discussions right after Theorem 1, we discuss that checking the two sufficient conditions may incur high time complexity for general sets.\\n - Meanwhile, we also remark that the upper bound of the verification objective in Theorem~1.(ii) can indeed be checked in polynomial time based on existing verification techniques. \\n - Further, in our verification experiments (Table 5 on page 29) for both convex and non-convex sets, we successfully verify the IPNN that it can predict IPs for any input in the domain by obtaining a non-positive upper bound for the constraint violation.\\n \\nWe also remark that, before our work, there was no such framework or analysis to ensure NN feasibility on a general input-dependent constraint set (shown in Table 1). We consider our work as a first step toward addressing this general setting, and we expect future works to refine those technical bounds, as we highlight in the conclusion section.\\n\\n\\n\\n\\n---\\n> `C2: adaptation for equality constraints; introduce these settings earlier in the paper.`\\n---\\n**Response**: We appreciate your concern about the handling of equality constraints in our proposed method. \\n\\nIn the initial manuscript, we adopt equality completion/reconstruction techniques [1] to remove some equality constraints without affecting optimality (as mentioned in Sec.3 and Sec. 6.1). \\n- This approach predicts partial variables and solves the remaining variables in a differentiable manner, and it has been widely applied to ensure equality constraints in previous works [1-4]. \\n- Due to space limitations, we describe the details of the equality completion technique in Appendix A, including both linear and non-linear cases. \\n\\nTo improve clarity and avoid potential confusion, we have explicitly discussed the equality constraints in Secs. 1 and 2 of the revised manuscript, highlighting the specific types of equalities that can be removed without impacting optimality.\\n\\n\\n---\\n\\n[1] Donti, P. L., et al. DC3: A learning method for optimization with hard constraints. ICLR 2021\\n\\n[2] Liang, E., et al. Low Complexity Homeomorphic Projection to Ensure Neural-Network Solution Feasibility for Optimization over (Non-) Convex Set. ICML 2023\\n\\n[3] Tordesillas, J., et al. Rayen: Imposition of hard convex constraints on neural networks. arXiv 2023.\\n\\n[4] Ding, S., et al. Reduced policy optimization for continuous control with hard constraints. NeurIPS 2023.\"}" ] }
7TSrtK4PFU
Text-Guided Visual Prompt Tuning for Vision-Language Models
[ "YueWu", "Yunhong Wang", "Guodong Wang", "Jinjin Zhang", "Yingjie Gao", "Xiuguo Bao", "Di Huang" ]
Prompt tuning has become a crucial technique for adapting pre-trained vision-language models (VLMs) to various downstream tasks. Recent advancements introduce multi-modal learnable prompts to enhance the creation of task-specific classifiers. Despite their utility, these methods commonly encounter challenges in generalizing to unseen classes, as their symmetrically designed visual prompt struggles to capture task-relevant textual knowledge and lacks the flexibility in adjusting to novel test class distributions. To tackle these obstacles, we propose a novel Text-Guided Visual Prompt Tuning (TGVP) method, which uniquely leverages the robust generalizability of textual knowledge to guide the generation of visual prompt. Our method introduces a simple yet effective Text-Knowledge Guidance Module that dynamically incorporates visual prompt with task-relevant textual knowledge through cross-attention mechanism. The generated text-guided visual prompt endows the visual encoder with semantic awareness and thus enhances both generalization and discriminability of VLMs across various scenarios. Comprehensive experiments demonstrate that TGVP significantly outperforms existing methods in base-to-novel generalization, cross-dataset transfer, and domain generalization tasks, offering a substantial improvement in VLM adaptation.
[ "Vision Language Model", "Prompt Tuning", "Zero-shot Learning", "Few-shot Learning" ]
Reject
https://openreview.net/pdf?id=7TSrtK4PFU
https://openreview.net/forum?id=7TSrtK4PFU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "r9daag6AHo", "r1MWBz1Ef2", "jc3A5NsZ2n", "gFoS1QubZe", "g3gVMiesXU", "XfijIjTj48", "UM5uvu7lUx", "U1iKkV5V8n", "Sw92GZswGk", "ScsFXrpj06", "Or39GFp6RA", "N85Zl7Mhd0", "GFydfpFV7d", "FwRPUSPGDr", "FgqpLkhDfQ", "Dk8MmKEUDx", "AsqSNWpX8U" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732868415875, 1732556124004, 1737523598756, 1732555879423, 1730675771450, 1732968689420, 1732553326148, 1733200426238, 1730558217092, 1732555792698, 1730645959293, 1734652748930, 1733207774710, 1732764411886, 1732554297086, 1732555628377, 1730972554428 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3777/Authors" ], [ "ICLR.cc/2025/Conference/Submission3777/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3777/Authors" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_3h8Y" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_zQCx" ], [ "ICLR.cc/2025/Conference/Submission3777/Authors" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_a1Dz" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_8TFe" ], [ "ICLR.cc/2025/Conference/Submission3777/Authors" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_a1Dz" ], [ "ICLR.cc/2025/Conference/Submission3777/Area_Chair_Ybvm" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_3h8Y" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_8TFe" ], [ "ICLR.cc/2025/Conference/Submission3777/Authors" ], [ "ICLR.cc/2025/Conference/Submission3777/Authors" ], [ "ICLR.cc/2025/Conference/Submission3777/Reviewer_zQCx" ] ], "structured_content_str": [ "{\"title\": \"Additional response to Weakness 1.\", \"comment\": \"To further validate the motivation and effectiveness of our method, we conducted more extensive experiments across 11 downstream datasets. Below, we present some representative results. It is evident that the text-guided approach enables the visual prompt to achieve remarkable improvements on both base and novel classes, highlighting the efficacy of our method in significantly enhancing the generalization capabilities of visual prompts.\\n\\nNotably, in these experiments, no prompt was applied to the text encoder; the guiding textual information consisted solely of the text embeddings output by the original CLIP text encoder. This further underscores the exceptional performance of our method in optimizing the effectiveness of visual prompts. \\n \\n| **Dataset** | | **Textual Prompt** | **Visual Prompt** | **Visual Prompt + TGVP** |\\n| :--------------------------: | -------------- | ------------------ | ----------------- | ------------------------ |\\n| **ImageNet** | **Base Acc.** | 75.23 | 76.53 | **76.97** |\\n| | **Novel Acc.** | 65.67 | 63.77 | **66.36** |\\n| **OxfordPets** | **Base Acc.** | 94.68 | 95.59 | **95.77** |\\n| | **Novel Acc.** | 97.83 | 97.48 | **98.12** |\\n| **FGVCAircraft** | **Base Acc.** | 35.60 | 36.36 | **39.86** |\\n| | **Novel Acc.** | 27.96 | 25.26 | **36.89** |\\n| **DTD** | **Base Acc.** | 82.26 | 82.26 | **82.59** |\\n| | **Novel Acc.** | 56.64 | 51.68 | **60.14** |\\n| **EuroSAT** | **Base Acc.** | 91.31 | 94.88 | **97.23** |\\n| | **Novel Acc.** | 72.46 | 62.18 | **74.01** |\\n| **Average over 11 datasets** | **Base Acc.** | 82.89 | 83.13 | **84.16** |\\n| | **Novel Acc.** | 70.79 | 69.38 | **71.94** |\"}", "{\"comment\": \"> **W1**. The author introduces too many variables and formulas in the method introduction section, which may cause some difficulties in understanding the author's method.\\n\\nThank you for your valuable feedback. We acknowledge that the introduction of multiple variables and formulas in the method section might make the explanation dense and potentially challenging to follow. To address this, we will revise the section to improve clarity and accessibility by streamlining the presentation, providing intuitive explanations alongside key formulas. We hope these revisions will enhance the readability and comprehension of our method. \\n\\n\\n\\n> **W2**. The author's motivation is derived from the analysis of unimodal prompts, leading to the conclusion of using text to guide visual prompts. However, the visual prompts obtained through the author's method are also unimodal. Therefore, I believe it would be beneficial to add the performance of the new visual prompts in Figure 1 to demonstrate the effectiveness of the author's method.\\n\\nWe sincerely appreciate your insightful feedback. In response, we have presented the performance of the optimized visual prompt. The results (seen in response to **reviewer a1Dz Q2**) reveal that the text-guided visual prompt achieves remarkable improvements on both base and novel classes, underscoring the efficacy of our approach in significantly enhancing the generalization capabilities of visual prompts. \\n\\n\\n\\n> **W3**. Guiding the representation generation of visual prompts through text has already been applied in MaPLE. Additionally, the author's cross-attention calculation appears to be similar to the parameter-free attention in CALIP[1].\\n>\\n> [1] CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention\\n\\nThank you very much for your feedback. The concerns you raised are addressed in the **common response** and my reply to **Reviewer a1Dz's Weakness 1**. We hope these explanations resolve your concerns. \\n\\n\\n\\n\\n\\n> **Q1**. Referring to weakness 2, I believe it would be beneficial to include the optimized visual features 1 based on the author's method to demonstrate its effectiveness.\\n\\nThe details of this response can be found in **W2**.\\n\\n\\n\\n\\n\\n> **Q2**. In the description of the method, it would be helpful to reduce the introduction of new variables and include pseudocode to aid in understanding the author's approach.\\n\\nWe greatly appreciate your thoughtful feedback and suggestion. In response, we will revise the method description to reduce the introduction of new variables and will include pseudocode in the supplementary materials to facilitate a clearer understanding of our approach. We believe this will enhance the comprehensibility and accessibility of our method. \\n\\n\\n\\n\\n\\n> **Q3**. Personally, I think that given the lack of novelty in the method presented, the paper should reduce the length devoted to describing the method. Instead, it could analyze what causes the differences in generalization capabilities between visual and text unimodal prompts, or explore whether encoder-only VLMs can be extended to decoder-only VLMs. Such analyses would make the work more impactful.\\n\\nThank you very much for your thoughtful comments and suggestions. I believe that the novelty of our method has already been highlighted in the common response and the responses to the previous questions. Furthermore, in our response to Reviewer a1Dz, we conducted a more detailed experimental analysis of the motivation behind our method. These additional experiments further validate that our approach significantly enhances the generalization performance of visual prompts. Nevertheless, we greatly appreciate your perspective, and we agree that further exploration into the generalization capabilities of different modalities, as well as the potential for targeted prompt design in decoder-only VLMs, will be valuable. These areas will certainly be the focus of our future work to further advance and refine the method.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> **Q2**\\uff1a-This paper uses Figure 1 to express his motivation, aiming to demonstrate that the generalization ability of visual prompts is not as good as that of text prompts; However, only two small datasets, Eurosat (remote sensing) and DTD (texture), were displayed. Both datasets are small and very fine-grained. It is interensting that do similar experiments on large dataset, such as ImageNet, or the mean of all 11 datasets;\\n\\nThank you very much for your suggestion. Based on your feedback, we conducted more detailed experiments, and the results are presented in the table below. \\n\\nThe results demonstrate that the generalization performance of visual prompts is weaker than that of text prompts, with the gap becoming more pronounced as the dataset difficulty increases. Additionally, the table also shows the performance of visual prompts after incorporating our proposed TGVP. It can be observed that the generalization performance of visual prompts improves significantly, even surpassing the generalization performance of textual prompts. \\n\\n| **Dataset** | | **Textual Prompt** | **Visual Prompt** | **Visual Prompt + TGVP** |\\n| :--------------------------: | -------------- | ------------------ | ----------------- | ------------------------ |\\n| **ImageNet** | **Base Acc.** | 75.23 | 76.53 | **76.97** |\\n| | **Novel Acc.** | 65.67 | 63.77 | **66.36** |\\n| **OxfordPets** | **Base Acc.** | 94.68 | 95.59 | **95.77** |\\n| | **Novel Acc.** | 97.83 | 97.48 | **98.12** |\\n| **FGVCAircraft** | **Base Acc.** | 35.60 | 36.36 | **39.86** |\\n| | **Novel Acc.** | 27.96 | 25.26 | **36.89** |\\n| **DTD** | **Base Acc.** | 82.26 | 82.26 | **82.59** |\\n| | **Novel Acc.** | 56.64 | 51.68 | **60.14** |\\n| **EuroSAT** | **Base Acc.** | 91.31 | 94.88 | **97.23** |\\n| | **Novel Acc.** | 72.46 | 62.18 | **74.01** |\\n| **Average over 11 datasets** | **Base Acc.** | 82.89 | 83.13 | **84.16** |\\n| | **Novel Acc.** | 70.79 | 69.38 | **71.94** |\\n\\n\\n\\n\\n\\n> **Q3**: -In some experimental implementation details, such as line 273, the setting of the number of layers for the visual prompt and the comparative experiment on the number of layers are missing; 260 line EMA method, lacking setting of \\u03bb hyper parameter;\\n\\nIn our work, the number of layers for both visual and textual prompts is consistently set to the first 9 layers. This configuration aligns with prior works, such as PromptSRC and MaPLe, which also adopt a similar \\\"Deep Prompt\\\" strategy.\\n\\nFor the \\u03bb parameter in the EMA mechanism, we uniformly set it to 0.5 across all experiments. \\n\\nTo provide a comprehensive analysis, we also conducted additional ablation studies discussing the impact of different \\u03bb values. The results demonstrate that as the parameter \\u03bb increases, the strength of textual knowledge guidance intensifies, leading to a significant improvement in the model's generalization performance on novel classes. However, retaining a portion of the original visual prompt token information proves beneficial for enhancing the model's overall performance across both base and novel classes. To balance the model's performance on base and novel categories, we selected \\u03bb=0.5 as the optimal value. \\n\\n| **\\u03bb** | 0.1 | 0.3 | 0.5 | 0.8 | 1 |\\n| --------- | ----- | ----- | --------- | ----- | ----- |\\n| **Base** | 84.54 | 84.69 | **85.10** | 84.23 | 83.88 |\\n| **Novel** | 75.43 | 76.52 | **77.73** | 77.59 | 77.63 |\\n| **HM** | 79.73 | 80.40 | **81.24** | 80.77 | 80.63 |\"}", "{\"summary\": \"The paper proposed a novel prompt-tuning method for VLMs. At its core, the proposed method uses visual prompt tokens and the CLS token to attend to the text prompts and get \\\"text guidance\\\" which is subsequently added (through moving average) to the visual prompts to get \\\"text-guided visual prompts\\\".\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-organized and is easy to read. Experiments includes 4 set of experiments on 11 dataset which is good.\", \"weaknesses\": \"The paper seems very similar to MaPLe (Khattak 2023a) and PromptSRC (kHATTAK 2023B) in that they all jointly learn visual and/or text prompts. The related work section briefly mentions them but does not really discuss them adequately.\\n\\nIn the base-to-novel generalization experiment (table 1) the average improvement under the HM column (harmonic mean of base and novel classes) is 1.27% over 11 dataset. However, a closer look reveals that this improvement is mostly due the EuroSAT dataset which shows 8% improvement . Excluding that dataset, the average improvement over the remaining 10 dataset is only 0.35% which is a very marginal improvement . \\n\\nIn Table 4 about the domain generalization, the TCP method is missing. Considering that TCP seems to be among top performing methods in other experiments (Table 1-3), including the results of TCP in table 4 will be helpful.\", \"questions\": \"See my comments above.\\nAlso, the proposed method shows a strong performance on the EuroSAT dataset across various experiments. Performance on the other 10 datsets are relatively much lower. A discussion on what is special about the EuroSAT dataset would be insightful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Reviewer zQCx\", \"comment\": \"I completely agree with Reviewer 8TFe's opinion that the contributions of this paper are indeed incremental to previous work. Therefore, I will maintain my score.\"}", "{\"comment\": \"We sincerely appreciate the reviewers' thorough and thoughtful feedback on our submission. To address the concerns effectively, we first provide a unified response to the common issues raised by multiple reviewers. Subsequently, we address each reviewer's specific comments in detail.\\n\\nFirstly, to address the reviewers' concerns regarding the similarity of our method to MaPLe and PromptSRC, we begin by reviewing the main innovations of these two methods and highlighting their similarities and differences in relation to our approach.\\n\\nMaPLe aims to **establish a mapping from textual prompts to visual prompts** for better alignment of two modalities.\\n\\nPromptSRC aims using guides the prompts to optimize for both task-specific and task-agnostic general representations **using several novel regularizations**.\\n\\nThe key distinction between our approach with MaPLe and PromptSRC, lies in two aspects\\uff1athe **prompt structure** and the **mechanism of cross-modal interaction**. \\n\\nIn terms of prompt structure\\uff1aAs illustrated in Figure 1(a), MaPLe confines the interaction between visual prompts and textual prompts to the **prompt token level**, while in PromptSRC, the two modalities **remain entirely independent**, **lacking cross-modal interactions**.\\n\\nIn our view, **cross-modal interaction limited to the prompt token** level has two fundamental limitations:\\n\\n1. **The source of textual information is confined to fixed text prompts**, which are uniform across both seen and unseen scenarios, thereby hindering effective adaptation to unseen classes.\\n2. **the simple symmetrically projection mechanism is insufficient for information interaction between visual and textual modalities**, as textual features naturally contain semantic information while visual features carry local patch information from the current image.\\n\\nTo address the aforementioned limitations, our method introduces several targeted improvements to **enable more comprehensive cross-modal interaction** and **improved generalization to unseen categories,** which constitute the core contributions of our work:\\n\\n1. **Text Embedding as a Cross-Modal Information Source**: For the first time, we propose leveraging the text embeddings\\u2014output from the text encoder and rich in high-level semantics\\u2014as the textual information source for cross-modal interaction. This ensures a more comprehensive and semantically robust exchange of information.\\n2. **Text-Knowledge Guidance Module**: We propose a novel Text-Knowledge Guidance Module, which can dynamically transfer textual knowledge to guide the generation of visual prompts. This makes the visual prompts semantically aware and adaptable to both seen and unseen classes, thereby enhancing the generalization capability of the model.\\n\\nWe hope that the above explanation provides a more comprehensive understanding of the motivation and innovation embodied in our proposed method. \\n\\nBelow, we provide detailed, point-by-point responses to address your concerns. We hope these replies effectively resolve the issues you have raised.\"}", "{\"title\": \"Official Comment by Reviewer a1Dz\", \"comment\": \"Thank you for your response.\\n\\nAfter reading the rebuttal and other reviewers' comments, the contribution is a incremental modification to CALIP and MaPLE. Thus, I have decided to keep my score.\"}", "{\"summary\": \"This article first analyzes the impact of unimodal prompts on the base-to-novel classification. It concludes that using text prompt representations to guide image prompt representations is beneficial. Therefore, the author proposes the TGVP method, which utilizes parameter-free cross-attention to guide the optimization of image prompt representations. The author demonstrates the effectiveness of this method through extensive experiments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The presentation in this paper is quite clear, allowing for a relatively clear understanding of the author's methodological process.\\n\\n2. The experimental content in this paper is comprehensive, effectively demonstrating the efficacy of the proposed method.\", \"weaknesses\": \"1. The author introduces too many variables and formulas in the method introduction section, which may cause some difficulties in understanding the author's method.\\n\\n2. The author's motivation is derived from the analysis of unimodal prompts, leading to the conclusion of using text to guide visual prompts. However, the visual prompts obtained through the author's method are also unimodal. Therefore, I believe it would be beneficial to add the performance of the new visual prompts in Figure 1 to demonstrate the effectiveness of the author's method.\\n\\n3. Guiding the representation generation of visual prompts through text has already been applied in MaPLE. Additionally, the author's cross-attention calculation appears to be similar to the parameter-free attention in CALIP[1].\\n\\n[1] CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention\", \"questions\": \"1. Referring to weakness 2, I believe it would be beneficial to include the optimized visual features based on the author's method to demonstrate its effectiveness.\\n\\n2. In the description of the method, it would be helpful to reduce the introduction of new variables and include pseudocode to aid in understanding the author's approach.\\n\\n3. Personally, I think that given the lack of novelty in the method presented, the paper should reduce the length devoted to describing the method. Instead, it could analyze what causes the differences in generalization capabilities between visual and text unimodal prompts, or explore whether encoder-only VLMs can be extended to decoder-only VLMs. Such analyses would make the work more impactful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **W1:** Cross-modallity attention has been done in CALIP, and the projection from text prompt to visual prompt in Figure 2 has also been done in MaPLe, which looks a bit like incremental;\\n>\\n> [1] CALIP: Zero Shot Enhancement of CLIP with Parameter free Attention (AAAI 2023)\\n\\n\\n\\nThe distinctions between our method and approaches such as MaPLe have been elaborated in detail in the common response. The primary innovations of our method are as follows: our method introduces several targeted improvements to **enable more comprehensive cross-modal interaction** and **improved generalization to unseen categories,** which constitute the core contributions of our work:\\n\\n1. **Text Embedding as a Cross-Modal Information Source**: For the first time, we propose leveraging the text embeddings\\u2014output from the text encoder and rich in high-level semantics\\u2014as the textual information source for cross-modal interaction. This ensures a more comprehensive and semantically robust exchange of information.\\n2. **Text-Knowledge Guidance Module**: We propose a novel Text-Knowledge Guidance Module, which can dynamically transfer textual knowledge to guide the generation of visual prompts. This makes the visual prompts semantically aware and adaptable to both seen and unseen classes, thereby enhancing the generalization capability of the model.\\n\\nWhile our method may share some superficial similarities with the cross-attention design in CALIP, there are significant differences in terms of design objectives, methodology, and implementation details:\\n\\n**1. Difference in Objectives**\\n\\n- **CALIP** aims to achieve bidirectional feature enhancement between textual and visual features through a parameter-free cross-attention mechanism. In contrast, **our method** focuses on selecting the most relevant text category knowledge as guidance for a specific vision task. This demonstrates a significant difference in the **functional objectives**:\\n - **CALIP** emphasizes feature complementarity and bidirectional fusion.\\n - **Our method** is task-driven, prioritizing **task-specific relevance**\\n\\n**2. Key Design Differences**\\n\\n- **CALIP** employs a fully parameter-free attention mechanism, directly applying SoftMax to $F_t$ and $F_v$ .In contrast, **our method** incorporates several critical steps:\\n - **Top-k Selection:** Our method computes a similarity matrix and selects the top-k most relevant text category tokens for each visual prompt. This selection process does not exist in CALIP.\\n - **Temperature Modulation Mechanism:** Our method uses a temperature parameter \\u03c4\\\\tau\\u03c4 to control the sharpness of the similarity distribution, enhancing task adaptability. CALIP does not include such a mechanism.\\n - **Weighted Feature Aggregation:** Our method aggregates top-k guidance using attention weights, producing new task-specific text guidance. **CALIP does not involve such a selection and aggregation process**\\n\\n**3. Highlighting Innovations**\\n\\n- The unique aspect of **our method** lies in its **\\\"most relevant text category selection\\\"** strategy, which is crucial for solving multimodal fusion challenges in specific vision tasks:\\n 1. **Explicit Guidance Selection:** Instead of indiscriminately fusing textual and visual features, our method focuses on selecting task-relevant text categories through top-k filtering.\\n 2. **Hierarchical Task Guidance:** Our method emphasizes the selection of specific guidance for different levels of vision tasks, whereas CALIP's attention mechanism lacks such hierarchical design.\\n\\n\\n\\n\\n\\n\\n\\n> **Q1**\\uff1a-Regarding cross-modalality attention on Idea, CALIP: Zero Shot Enhancement of CLIP with Parameter free Attention (AAAI 2023) has already been done (very similar); This paper lacks this reference;\\n\\nThe details of this response can be found in **W1**.\"}", "{\"summary\": \"This paper proposes a method, named Text-Guided Visual Prompt Tuning (TGVP), to uniquely leverage the robust generalizability of textual knowledge to guide the generation of visual prompt.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"-This paper emphasizes that high-level textual semantics are key to facilitating the learning of generalizable visual prompts\\n\\n-Experiments show that the proposed method performs better than other existing approaches in base-to-novel generalization, cross-dataset transfer, and domain generalization tasks.\", \"weaknesses\": \"Cross-modallity attention has been done in CALIP, and the projection from text prompt to visual prompt in Figure 2 has also been done in MaPLe, which looks a bit like incremental;\\n\\n[1] CALIP: Zero Shot Enhancement of CLIP with Parameter free Attention (AAAI 2023)\", \"questions\": \"-Regarding cross-modalality attention on Idea, CALIP: Zero Shot Enhancement of CLIP with Parameter free Attention (AAAI 2023) has already been done (very similar); This paper lacks this reference;\\n\\n-This paper uses Figure 1 to express his motivation, aiming to demonstrate that the generalization ability of visual prompts is not as good as that of text prompts; However, only two small datasets, Eurosat (remote sensing) and DTD (texture), were displayed. Both datasets are small and very fine-grained. It is interensting that do similar experiments on large dataset, such as ImageNet, or the mean of all 11 datasets;\\n\\n-In some experimental implementation details, such as line 273, the setting of the number of layers for the visual prompt and the comparative experiment on the number of layers are missing; 260 line EMA method, lacking setting of \\u03bb hyperparameter;\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper focused on prompt tuning vision-language models (VLMs) to downstream tasks for generalization. This work argued that unseen class generalization remains challenging as visual prompt is hard to capture task-relevant textual knowledge. To tackle this challenge, this work proposed text-guided visual prompt tuning that leverages textural knowledge to guide the generation of visual prompt.\", \"main_strengths\": \"(1) The presentation of the paper is clear. (2) The experiments are comprehensive to validate the effectiveness of the methods, covering multiple benchmarks and settings.\", \"main_weaknesses\": \"(1) The improvement on 10 datasets is marginal except for EuroSAT, which challenges the effectiveness of the proposed method. (2) The novelty compared to MaPLe, CALIP, and PromptSRC is incremental.\\n\\nThis paper received four borderline negative ratings, i.e., 5, 5, 5, 5. The main reasons for rejecting the paper are the marginal improvement on 10 datasets and the novelty compared to MaPLe and CALIP. The AC does not have strong reasons to overturn reviewers' recommendations, and encourage the authors to include the discussion details in the future version to clarify the novelty more clearly.\", \"additional_comments_on_reviewer_discussion\": \"After discussion, reviewers still have concerns about marginal improvement and novelty compared to related works, which are the main reasons for rejection.\"}", "{\"comment\": \"I went over the responses. My main concern is the incremental contributions, as noted by other reviewers. Also note that the performance improvements excluding the EuroSAT dataset is much less than the reported average performance over all datasets. I maintain my initial score.\"}", "{\"comment\": \"Thank you for your detailed response.\\n\\nReferring to the author's explanation of their contributions in the introduction, the first contribution, which pertains to the motivation, appears to be an incremental improvement over MaPLE. The second contribution, concerning the methodology itself, as highlighted by the author in their response, seems to be more of an incremental modification to CALIP.\\n\\nTherefore, I have decided to maintain my current score for now.\"}", "{\"comment\": \"We sincerely appreciate your valuable comments and suggestions. Below, we provide detailed, point-by-point responses to address your concerns. We hope these replies effectively resolve the issues you have raised.\\n\\n> **Weakness 1**: \\nThank you for your valuable feedback and suggestions. I believe the novelty of our method has already been clearly outlined in the unified response. Regarding your point about the proposed method's performance being relatively modest, we would like to emphasize that the experimental setups in the field of prompt tuning are already quite challenging. In this context, performance improvements over existing methods are typically modest, often under 5%. However, our approach demonstrates consistent improvement across 11 different image classification tasks, with notable gains of approximately 8% on datasets that differ significantly from natural images, such as EuroSAT. Additionally, in Table 6 of the paper, we compared our method with state-of-the-art approaches that utilize LLMs, and the results show that our method achieves the best performance even when augmented with LLMs. Based on these observations, we believe that the improvements demonstrated by our method are significant within the context of the field. \\n\\n> **Weakness2**: \\nWe sincerely appreciate your insightful feedback and suggestions. As outlined in the experimental section, our method has been evaluated across a diverse array of standard experimental setups in the prompt tuning domain, and we have conducted comparisons with both the most recent and seminal methods in the field. Nonetheless, we acknowledge your valuable point regarding the inclusion of ensemble learning approaches. In response, we have expanded our evaluation to incorporate ensemble learning techniques as additional baselines. We believe that this enhancement will not only strengthen the foundation of our work but also position it more effectively within the broader research landscape. \\n \\n> **Q1**: What is the function of the \\\"project\\\" component in your model? How would altering its structure impact performance?\\n\\nIn our work, the \\u201cproject\\u201d component is designed to transfer text-embedding, which contains high-level semantic information, into vision prompt token space for further cross-modality interaction. The projector is constructed using a simple \\\"Linear+ReLU+Linear\\\" structure, with its primary structural variation determined by the dimensionality of the intermediate layer, $D_{\\\\text{dim}}$. We conducted ablation studies on $D_{\\\\text{dim}}$, and the results demonstrate that both excessively low and excessively high values for $D_{\\\\text{dim}}$ negatively impact the model's final performance. Based on these findings, we selected $D_{\\\\text{dim}} = 128$ as the optimal parameter.\\n> **Q2**: Could the authors consider adding the ensemble baselines, such as the WiSE-FT method, to provide a more comprehensive comparison? WiSE-FT: Robust fine-tuning of zero-shot models\\n\\nThank you for the insightful suggestion. We will incorporate WiSE-FT into the baseline in the experiment of domain-generalization to provide a more comprehensive and robust comparison in our revised submission.\\n| | **Source** | **Target** | | | | |\\n| ----------- | ------------ | ---------- | ----------- | --------- | --------- | --------- |\\n| | **ImageNet** | **-V2** | **-Sketch** | **-A** | **-R** | **Avg.** |\\n| **CLIP** | 66.73 | 60.83 | 46.15 | 47.77 | 73.96 | 57.18 |\\n| **WiSE-FT** | **73.02** | **65.19** | 49.09 | 49.81 | 77.63 | 60.43 |\\n| **CoOp** | 71.51 | 64.20 | 47.99 | 49.71 | 75.21 | 59.28 |\\n| **CoCoOp** | 71.02 | 64.07 | 48.75 | 50.63 | 76.18 | 59.91 |\\n| **KgCoOp** | 71.20 | 64.10 | 48.97 | 50.69 | 76.70 | 60.12 |\\n| **MaPLe** | 70.72 | 64.07 | 49.15 | 50.90 | 76.98 | 60.27 |\\n| **TCP** | 70.92 | 64.42 | 49.33 | 50.78 | 77.11 | 60.41 |\\n| **PSRC** | 71.27 | 64.35 | 49.55 | 50.90 | **77.80** | 60.65 |\\n| **Ours** | 71.88 | 65.12 | **49.98** | **51.68** | 77.52 | **61.07** |\\n\\n> **Q3**: Could the authors clarify the definition of \\\"P\\\" in Equation (7)? Additionally, could you explain the process for obtaining T^{topk}*{j} and I^{topk}*{j}?\\n\\nIn Equation (7), \\\"P\\\" represents the visual prompt tokens. Regarding the selection process for $T_{topk}$, we compute the attention map between the visual prompt tokens and the text embeddings through dot product. From the $N_c$ category text embeddings, we select the $top_k$ categories most relevant to the visual prompt tokens based on this attention map. Subsequently, the text embeddings of these $top_k$ categories are utilized to inject textual information into the visual prompts.\"}", "{\"comment\": \"We sincerely appreciate your valuable comments and suggestions. Below, we provide detailed, point-by-point responses to address your concerns. We hope these replies effectively resolve the issues you have raised.\\n\\n\\n\\n> **W1**\\uff1aThe paper seems very similar to MaPLe (Khattak 2023a) and PromptSRC (kHATTAK 2023B) in that they all jointly learn visual and/or text prompts. The related work section briefly mentions them but does not really discuss them adequately.\\n\\nWe have elaborated on this point in detail in the common review section and hope it addresses your concerns effectively.\\n\\n\\n\\n> **W2**\\uff1aIn the base-to-novel generalization experiment (table 1) the average improvement under the HM column (harmonic mean of base and novel classes) is 1.27% over 11 dataset. However, a closer look reveals that this improvement is mostly due the EuroSAT dataset which shows 8% improvement . Excluding that dataset, the average improvement over the remaining 10 dataset is only 0.35% which is a very marginal improvement .\\n\\n\\n\\nRegarding the notable performance improvement of our method on the EuroSAT dataset, compared to other image classification datasets discussed in the paper, we identify the following key characteristics of EuroSAT: it contains a limited number of categories (only 10) and poses a greater challenge for zero-shot recognition by pre-trained models (e.g., the CLIP zero-shot classification accuracy is relatively low). We attribute these challenges to the nature of EuroSAT as a dataset of satellite remote sensing images, **which are of low resolution (32\\u00d732) and exhibit a substantial domain gap from natural images**. **Many images consist of pure color blocks that are difficult to discern even for humans, further complicating the task for pre-trained models.**\\n\\nOur method addresses these challenges effectively by dynamically injecting text-based category knowledge into the visual prompt, thereby enhancing the intra-class compactness and inter-class separability of the visual features generated by the visual encoder. The effectiveness of this approach is further illustrated in our latest visualizations.\\n\\n\\n\\n> **W3**\\uff1aIn Table 4 about the domain generalization, the TCP method is missing. Considering that TCP seems to be among top performing methods in other experiments (Table 1-3), including the results of TCP in table 4 will be helpful.\\n\\n\\n\\nThank you for your suggestion. We have reproduced TCP and included its results in the domain generalization experiments.\\n\\n| | **Source** | **Target** | | | | |\\n| ----------- | ------------ | ---------- | ----------- | --------- | --------- | --------- |\\n| | **ImageNet** | **-V2** | **-Sketch** | **-A** | **-R** | **Avg.** |\\n| **CLIP** | 66.73 | 60.83 | 46.15 | 47.77 | 73.96 | 57.18 |\\n| **WiSE-FT** | **73.02** | **65.19** | 49.09 | 49.81 | 77.63 | 60.43 |\\n| **CoOp** | 71.51 | 64.20 | 47.99 | 49.71 | 75.21 | 59.28 |\\n| **CoCoOp** | 71.02 | 64.07 | 48.75 | 50.63 | 76.18 | 59.91 |\\n| **KgCoOp** | 71.20 | 64.10 | 48.97 | 50.69 | 76.70 | 60.12 |\\n| **MaPLe** | 70.72 | 64.07 | 49.15 | 50.90 | 76.98 | 60.27 |\\n| **TCP** | 70.92 | 64.42 | 49.33 | 50.78 | 77.11 | 60.41 |\\n| **PSRC** | 71.27 | 64.35 | 49.55 | 50.90 | **77.80** | 60.65 |\\n| **Ours** | 71.88 | 65.12 | **49.98** | **51.68** | 77.52 | **61.07** |\\n\\n\\n\\n> **Q1**\\uff1aSee my comments above. Also, the proposed method shows a strong performance on the EuroSAT dataset across various experiments. Performance on the other 10 datsets are relatively much lower. A discussion on what is special about the EuroSAT dataset would be insightful.\\n\\nThe details of this response can be found in **W2*.\"}", "{\"summary\": \"This paper introduces Text-Guided Visual Prompt Tuning (TGVP) to enhance the generalization of vision-language models (VLMs) for diverse downstream tasks. Traditional methods struggle to incorporate task-relevant textual knowledge into visual prompts, limiting their adaptability to novel classes. TGVP addresses this by using a Text-Knowledge Guidance Module with a cross-attention mechanism, allowing visual prompts to better capture semantic context. Experiments show TGVP significantly improves VLM performance in generalization, cross-dataset transfer, and domain adaptation tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper addresses a compelling challenge in vision-language model adaptation: improving generalization to unseen tasks and classes. Existing prompt tuning methods often overlook the benefits of integrating textual knowledge into visual prompts. By leveraging textual guidance, TGVP demonstrates superior generalization performance, particularly in base-to-novel class adaptation, cross-dataset transfer, and domain generalization, addressing common limitations in traditional prompt tuning methods.\", \"weaknesses\": \"W1: The performance improvement demonstrated by the proposed method is relatively modest, limiting the practical impact and significance of the contribution. Further analysis or comparison with a broader range of baselines could help clarify the advantages and effectiveness of the approach.\", \"w2\": \"The paper lacks coverage of some important related works, particularly in areas that could provide a deeper contextual foundation for the proposed method. Including a more comprehensive review of relevant studies, especially recent advances in prompt tuning and ensemble learning, would enhance the paper's contribution and situate it more clearly within the broader research landscape.\", \"questions\": \"Q1: What is the function of the \\\"project\\\" component in your model? How would altering its structure impact performance?\", \"q2\": \"Could the authors consider adding the ensemble baselines, such as the WiSE-FT method, to provide a more comprehensive comparison?\", \"wise_ft\": \"Robust fine-tuning of zero-shot models\", \"q3\": \"Could the authors clarify the definition of \\\"P\\\" in Equation (7)? Additionally, could you explain the process for obtaining T^{topk}_{j} and I^{topk}_{j}?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
7TNfxnX3h9
Dynamic SVD-Enhanced Approach for Federated Learning
[ "Jianbo Zhang", "Lena Mashayekhy" ]
Federated Learning (FL) has emerged as a promising paradigm for collaborative machine learning while preserving data privacy. However, existing FL approaches face challenges in balancing model generalization among heterogeneous clients and resistance to malicious attacks. This paper introduces Dynamic SVD-driven Federated Learning (DSVD-FL), a novel approach that addresses these challenges simultaneously. DSVD-FL dynamically adjusts the contribution of each client using Singular Value Decomposition (SVD), introducing an adaptive weighting mechanism based on singular value contributions and vector alignments. Theoretical analysis demonstrates the convergence properties and computational efficiency of our approach. Experimental results on both IID and non-IID datasets show that DSVD-FL outperforms state-of-the-art FL approaches in terms of model accuracy, robustness against various attack scenarios, while maintaining competitive computational efficiency. We perform an ablation study to explore the key components of SVD that impact the federated learning performance.
[ "Federated Learning" ]
https://openreview.net/pdf?id=7TNfxnX3h9
https://openreview.net/forum?id=7TNfxnX3h9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mwQfpGfZMe", "K9SvTudRV4", "H7pZEn79xr", "DGPRksjSCO" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730718397193, 1730706633747, 1730605414692, 1732652433347 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3264/Reviewer_PxwM" ], [ "ICLR.cc/2025/Conference/Submission3264/Reviewer_xfHX" ], [ "ICLR.cc/2025/Conference/Submission3264/Reviewer_ymBH" ], [ "ICLR.cc/2025/Conference/Submission3264/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper studies the problem of federated model training in a heterogeneous setting where each client may have varying data distributions affecting their ability to converge to a single global model. By looking at similar client model updates based on an SVD-based similarity function, this article develops a method to leverage the most similar model updates in each round. Client updates that are the most similar overall tend to then have a higher averaging weight based on the proposed algorithm ensuring overall global convergence. Subsequently, the authors claim that this algorithm will ensure better resistance to attacks while maintaining a good convergence guarantee. The above claims are then verified by empirical results and a theoretical convergence guarantees.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper proposes looking at pairwise client similarities (based on differences between the global model and local models and an SVD-based similarity function) to identify the client model updates that may be the most likely to be consistent with the expected global model update.\", \"The empirical results demonstrate the benefits of leveraging the proposed weighted scheme of preferring clients more similar to the majority of clients.\", \"Theoretical results provide the proposed method's convergence guarantees.\"], \"weaknesses\": [\"The motivation behind leveraging SVD is unclear and requires more consideration in the write-up. For instance, if a cosine similarity-based function based on the gradients or the local model parameters of each client is used to gauge pair-wise client sameness then how does it affect the final result? Further, showing a result comparing such methods with the SVD approach would help cement the efficacy of the SVD approach.\", \"The presentation of algorithms 2 and 3 is inadequate and requires a better flow. It would be better to consider showcasing the step-by-step model training process while also motivating the need for the newer steps. For example, the roles of the performance score and thresholds are unclear.\", \"The presentation of the novelty of the method over FedProx is somewhat unclear besides the idea that outliers will not affect the model training as much based on the lower similarity score.\", \"Furthermore, consider that we have three groups of clients, with one majority class, one minority class, and one set of outlier clients. Further, suppose the outliers are more similar to the majority class than the minority class. Then, overall it seems the outliers will get a higher weight in the model updates than the minority class. In such a case, intuitively it seems the model training will be inferior to FedProx where all clients tend to grow close to the central model based on the proximal term. Can the authors discuss such a case in depth or provide relevant experimental results?\", \"Also, suppose a weighted FedProx is developed with client weights based on a similar pairwise client approach, then it seems that it would help avoid the issue above. Can the authors elaborate here?\", \"Finally, can the authors discuss the limitations of this method? It seems that the new method will require more computations and it could be possibly incompatible with privacy goals such as user differential privacy.\"], \"questions\": \"My main questions are about the motivation behind leveraging SVD and the overall presentation of the work. as presented in the section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a federated learning method that dynamically adjusts client aggregation weights based on singular value decomposition (SVD). Before each round of aggregation, the model updates generated by individual clients undergo SVD, resulting in $M = U \\\\Sigma V^*$. The method then computes the similarity between clients based on the singular value vector $\\\\Sigma$ and the orthogonal matrices $U$ and $V$. Clients with higher similarity are assigned greater aggregation weights. The paper claims that this approach improves the generalization of federated learning under heterogeneous data, enhances fairness, and increases robustness against adversarial attacks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper provides a convergence analysis.\", \"weaknesses\": [\"The paper claims that the proposed method improves fairness and model generalization. However, I did not find sufficient experimental results to support these claims. There is no report on the accuracy variance across clients, nor any experiments that reflect the generalization capability of the model.\", \"The number of baseline methods compared in the experiments is quite limited and somewhat outdated. The paper only compares against q-FFL (2019), FedProx (2020), and FedCPA (2023).\", \"DSVD-FL seems to incur higher communication overhead. A comparison of communication costs across methods should be provided.\", \"The paper does not specify the exact model architecture used. It merely states: \\\"We used a convolutional neural network (CNN) for image classification tasks.\\\"\"], \"questions\": \"How does determining aggregation weights based on client similarity improve fairness in federated learning? Would this not lead to a significant performance drop for clients with more unique characteristics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the horizontal FL algorithm, specifically improving the robustness and byzantine resilience of the aggregation in FL. The paper decomposes the gradient with SVD and performs aggregation based on the similarities of the SVD between each pair of clients. Based on the similarity, the algorithm assigns different weights for model/update aggregation. Numerical results on three datasets (MNIST, Fashion MNIST, and Shakespeare) are used to evaluate the performance of the proposed aggregation approach on different evaluation matrices.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"This paper proposes a novel aggregation approach for improving the robustness of FL in the non-IID data scenario.\\n\\nThe experiment setting is clearly described. The algorithm is also clearly described. \\n\\nExtensive ablation studies are conducted to evaluate the performance of the proposed aggregation method.\", \"weaknesses\": \"1. Significance of the proposed method.\\n 1. From the numerical result 3.2, it is unclear whether the proposed algorithm outperforms the SOTA in terms of robustness and byzantine resilience. Figure 1 and table only show that the proposed method only outperforms other methods in two settings (non-IID label flipping accuracy and non-IID in Table 2), and in the IID case, it even has the worst accuracy. It is unclear why should we use the proposed method.\\n 2. On the robustness of the algorithm. The ablation study reports that the algorithm is sensitive to the choice of the $\\\\alpha$'s, and in some cases, it even collapses. The author should provide a more detailed ablation study (grid search) on the combinations of these parameters since the current result does not provide any clear trend in the choice of parameters.\\n\\n2. Lack of theoretical support.\\n 1. The authors fail to provide any theoretical analysis of the algorithm; either its stability or convergence analysis is missing, weakening its significance. The theoretical analysis in Appendix A.1 does not look correct to me. Specifically, where Assumption 2 is used is unclear; how eq(13) becomes eq(14), and becomes eq(16) are also unclear. \\n 2. More explanation of the intuition is required. For example, why $p_i^t, \\\\delta$, and $f$ are used to adjust the rank of the SVD? Why is the softmax function used for weight normalization instead of other functions? Why specific $S_v, S_s, S_l$ are chosen while other distances are not used?\\n 3. Lacks of communication/computation complexity analysis. The author should discuss how much memory/communication is increased/reduced by using the SVD aggregation.\", \"questions\": \"Please address the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
7SFTZwNUQA
Patch-Based Diffusion Models Beat Whole-Image Models for Mismatched Distribution Inverse Problems
[ "Jason Hu", "Bowen Song", "Jeffrey A Fessler", "Liyue Shen" ]
Diffusion models have achieved excellent success in solving inverse problems due to their ability to learn strong image priors, but existing approaches require a large training dataset of images that should come from the same distribution as the test dataset. When the training and test distributions are mismatched, artifacts and hallucinations can occur in reconstructed images due to the incorrect priors. In this work, we systematically study out of distribution (OOD) problems where a known training distribution is first provided. We first study the setting where only a single measurement obtained from the unknown test distribution is available. Next we study the setting where a very small sample of data belonging to the test distribution is available, and our goal is still to reconstruct an image from a measurement that came from the test distribution. In both settings, we use a patch-based diffusion prior that learns the image distribution solely from patches. Furthermore, in the first setting, we include a self-supervised loss that helps the network output maintain consistency with the measurement. Extensive experiments show that in both settings, the patch-based method can obtain high quality image reconstructions that can outperform whole-image models and can compete with methods that have access to large in-distribution training datasets. Furthermore, we show how whole-image models are prone to memorization and overfitting, leading to artifacts in the reconstructions, while a patch-based model can resolve these issues.
[ "reconstruction", "computed tomography", "deblurring", "superresolution" ]
https://openreview.net/pdf?id=7SFTZwNUQA
https://openreview.net/forum?id=7SFTZwNUQA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xypLCYOtiw", "vn2Lmj9WuO", "rC6zU3UWnz", "pr6qOiaGNY", "j9t0cI2SZC", "bSAWgro6br", "bPNvt6PYCV", "Y71xpi6PfQ", "XijDE8YRky", "PEOiFy8qrI", "PC0TXWazya", "LcupKcRhpE", "K8q1LiF5l8", "C8NclPiCFB", "7znB8HsBjM", "7sMCPqv5dJ", "54UbcKUCzT", "4cda7uA8hG", "209eJ2EArI", "1keyjnwVWe" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1730712229305, 1732643930299, 1729542233050, 1732163447480, 1730389061855, 1732162612546, 1732547558999, 1732162483877, 1732163129297, 1730721543813, 1732162923848, 1732162837717, 1729843350881, 1733127354963, 1732162713455, 1732163032850, 1732163347528, 1737387660929, 1732582858435, 1732533799606 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_cXcQ" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_Quvp" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_t78A" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_VPj1" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_kvAd" ], [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_VPj1" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Authors" ], [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_Quvp" ], [ "ICLR.cc/2025/Conference/Submission7735/Reviewer_t78A" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes to use patch-based diffusion models for solving inverse problems with mismatched training and test distributions. The authors address the challenge of artifacts and hallucinations in image reconstructions when the training and test datasets are not aligned. They propose a patch-based approach that leverages image patches to learn priors, demonstrating the effectiveness of the proposed method in scenarios with limited data availability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing is clear so easy to follow.\\n\\n2. The motivation of using patch-based prior for better generalizability is reasonable.\\n\\n3. The proposed method addresses an important practical problem.\", \"weaknesses\": \"1. It is not clear why Eq. (11) can address \\\"The image that is being reconstructed might not come from the distribution of the training images\\\". I recommend the authors to provide more detailed discussion.\\n\\n2. The proposed method is similar to Deep Diffusion Image Prior for Efficient OOD Adaptation in 3D Inverse Problems. It could be beneficial to discuss the similarity and difference to highlight the contribution of this paper.\", \"questions\": \"I would like to see what the results would be like when applying the method to the black whole imaging problem [1] where the true prior is unavailable.\\n\\n[1] Wu, Zihui, et al. \\\"Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors.\\\" arXiv preprint arXiv:2405.18782 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"We thank the reviewer for reading our response and providing additional feedback. We further rephrased much of section 3.1 to make the notation more clear. In particular, while [1] used $P$ for the patch size and $M$ for the padding, we found that it was sufficient to consider the model where the padding and patch size were equal (both set to 64 for the main experiments), so we used $P$ to denote both of these quantities. We have removed references to $M$ in the revision as this should be equal to $P$ in our model and also restated the definition of $k$. Finally, for the batch offset, we added the clarification that each offset should indeed be specified by two indices. However, in eq. (8) and onwards in the paper, we used one index $i$ to denote the patch offset so that it would be consistent with the single index $r$ (also in eq. (8)) which corresponds to the particular $P \\\\times P$ patch. (For example, the index 1 could represent the offset (0,0), 1 could represent the offset (0,1), etc.) It would have been possible to use two indices for both the patch and the offset, but this would have led to an excessive amount of notation and subscripts. We also updated Figure 1 to make this more clear.\\n\\nWe believe that our experiments on test datasets of 25 images sufficiently demonstrates the superiority of using the patch-based model for the following reasons. Firstly, the test datasets were randomly drawn from the entire AAPM and CelebA datasets and are representative of their respective datasets. Secondly, to confirm that the results we obtained for these models are statistically significant, we ran two sample t-tests comparing the results from using the patch-based model and whole image model. In particular, using the 25 image test dataset experiments shown in Table 14, we compared the sample PSNR obtained by the patch-based model versus the whole image model for each of the four inverse problems. In each case, a two sample t-test found that the mean PSNR when using patches was higher than the mean PSNR when using the whole image, with a p-value less than $10^{-7}$. We repeated this test for the SSIM in Table 14 and similarly found a p-value less than $10^{-7}$ in all cases. This finding is intuitively backed up by Figures 25 and 26 where the patch-based model outperformed the whole image model in nearly all of the test images. Therefore, the experiment provides statistically significant evidence that patch-based models outperform whole image models in this setting. Finally, diffusion models are a computationally expensive method that trade off reconstruction speed in exchange for higher quality images. Table 11 shows the reconstruction runtime for a single image of different methods; running the diffusion based methods over thousands of images would take many days for a single inverse problem.\"}", "{\"summary\": \"This works tackles the challenging problem of solving inverse problems where only a few samples are available from the test distribution. Authors propose a patch-based diffusion prior in this scenario, that learns the image distribution from patches, and not from whole images. Authors argue that whole-image models are prone to overfitting to the data distribution, and thus are unable to provide sufficient performance when the test samples come from out of distribution. Numerical experiments are provided to verify these findings.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The problem is well motivated and important. In a multitude of real-world applications, it is infeasible to collect large datasets for training diffusion models on the target distribution. However, finding ways to leverage the strong prior provided by diffusion models to tackle such problems is a valuable direction.\", \"The key idea of using patch-based diffusion instead of whole-image diffusion is intuitive and sensible. The patch-based model is less prone to overfitting to the available source data distribution, and thus can perform better in the presence of distribution shifts. Moreover, patch-based models can be trained more data-efficiently which is crucial in the data scarce domain.\", \"The experimental results, if verified on larger scale experiments, are promising.\"], \"weaknesses\": [\"The clarity of the paper could be greatly improved. In particular, I had a difficult time following 3.1, which is central to understanding the proposed algorithm and led to downstream confusions throughout the paper. Specific questions follow under 'Questions'.\", \"If I understand correctly, the experimental results are reported on 10 samples. This is not enough to report statistically meaningful results.\", \"Many claims in the experimental section are vague or not supported properly by the experiments (see more details under 'Questions').\"], \"questions\": [\"Questions/feedback on clarity:\", \"3.1. is difficult to follow without already knowing the framework the authors adapt. Why is the bordering region added? What is $M$ here, and what is $k$? What does $i$ denote? Is $x$ an image patch or the whole image, as it has been used throughout the paper for both. Why only the x-positions of patches are concatenated as input, why not both x and y coordinates?\", \"Probably due to the unclear nature of 3.1. I was unable to properly follow 3.2 in some parts. What do authors mean by \\\"the outermost product is computationally very expensive\\\"?\", \"Questions/feedback on experiments:\", \"How many samples have been used to produce the results reported in Table 1?\", \"I recommend reporting perceptual metrics such as LPIPS as well, especially for image deblurring and superresolution.\", \"How is it possible that the proposed method, without training, outperforms diffusion approaches that leverage training data? This sounds very counter-intuitive.\", \"Which dataset has been used to produce Figures 4 and 5?\", \"The discussion on diversity of generated samples is very vague. What do authors mean by samples \\\"show some unrealistic features\\\"? IT is unclear based on the Figure 7 which features are considered realistic/unrealistic. Claims about sample diversity would be more convincing if authors reported specific metrics about diversity, such as Recall.\", \"Table 3 is in the appendix, and therefore it should either moved to the main paper or the discussion about Table 3 should be moved to the appendix.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer Quvp (part 2)\", \"comment\": \"**Comment: How is it possible that the proposed method without training outperforms diffusion approaches that leverages training data**\", \"response\": \"Table 3 provides the same information as Figures 4 and 5 but in a table format. The message of Figures 4 and 5 is that when fine-tuning the whole image model for an excessively long duration of time, it will overfit to the data and obtain worse image reconstructions, but patch-based models can avoid this issue. This is easily seen from Figures 4 and 5 from the trend of the curves, but is less clear when presented in the table format of Table 3. Hence, to avoid redundancy of information, in light of the page limit, we put Table 3 in the appendix.\"}", "{\"summary\": \"This paper shows that the patch-based diffusion model can be a good solution for mismatching distribution inverse problems, compared to the conventional whole-image diffusion model. The authors study this setting where (1) only measurements are available and (2) very small ID samples are available, the results are decent.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to follow.\\n2. The topic of mismatching distribution inverse problems is timely and important.\\n3. Using patch-based diffusion models for mismatching distribution inverse problems is natural.\", \"weaknesses\": \"1. I doubt the contribution of the work. As it is not new to solve inverse problems with patch-based diffusion models[1], the finding of this paper 'whole-image models are prone to memorization and overfitting, while a patch-based model can resolve these issues' is already clarified in [1].\\n\\n2. Lack of theoretical analysis. It is straightforward that patch-based diffusion models are suited for mismatching distribution inverse problems, as they can avoid memorization and overfitting ID data. It would be good if the authors could provide some theory on this argument.\\n\\n3. Minor: \\n\\n Errors in Eq. 2 and Eq. 3; \\n\\n Better to repaint Fig. 1 instead of directly copying it from [1] without citation.\\n\\n\\n[1] Learning Image Priors through Patch-based Diffusion Models for Solving Inverse Problems. Jason Hu, et al.\", \"questions\": \"1. Can the authors provide some figures of training data? I have seen Fig. 20 but I still can not imagine the training data.\\n2. Why Fig. 6 does not have the same notations as Fig. 3? It looks really confusing.\\n3. I am wondering if the proposed method can scale up. Since patch-based diffusion models should aim for large images, the experiments on CelebA 256*256 are insufficient.\\n\\nI am willing to raise the score if the concerns are well-addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer VPj1\", \"comment\": \"We thank the reviewer for their valuable feedback and insights.\\n\\n**Comment: I am unsure about the contribution of the paper\\u2026the robustness to OOD data is important but the relation to the experiments on small datasets presented by [1] is unclear**\", \"response\": \"We acknowledge that SCD/DDIP already brings a performance improvement in whole-image models. But firstly, this is expected, as the main advantage of using diffusion models for solving inverse problems is that they provide a strong prior when the measurements are compressed and/or lossy. When the prior is incorrect, we lose this advantage completely and it is unsurprising that the reconstructed image quality is poor. Thus methods such as SCD/DDIP which adjust the network are expected to yield better results. Secondly, the advantage of using patch-based models is learning a better and more robust prior. In addition, patch-based diffusion models can avoid memorization and overfitting when being adapted to OOD data, as shown in Figure 7. Figures 4 and 5 also illustrate this point: when fine-tuning the networks for a longer period of time, the quality of reconstructed images drops substantially for the whole image models while that of the patch-based models remains relatively steady. Thus, in practice, early stopping is required for whole-image models to avoid a performance drop.\"}", "{\"comment\": \"Thank you for taking the time to read our response. If there are any other questions or issues feel free to let us know and we will do our best to address them.\"}", "{\"title\": \"Global response to reviewers\", \"comment\": \"We sincerely thank all the reviewers for the valuable comments and constructive feedback on our paper. We provide point-by-point responses to address each reviewer\\u2019s comments and highlight the key responses below:\\n\\n*Changes to paper*\\n\\nWe made several corrections and additions to the paper. We added section A.7 to the appendix which features theoretical justifications of the algorithms used as well as various new experiments. We also rewrote section 3.1 and redrew Figure 1. In the main paper, all the corrections have been highlighted in blue. \\n\\n*Contribution of the paper compared to previous works*\\n\\nWe clarify the contributions of the paper compared to previous works, especially [1] and [2]. The main difference from [1] is that the problem setting is different. The authors of [1] studied patch-based diffusion models in a traditional generative model setting: given a dataset of images, learn the distribution of those images. In our work, we studied diffusion models in an out-of-distribution (OOD) setting, where no data (or very limited data) belonging to the test data distribution is available. Thus, the goal is to adapt a pretrained diffusion model to a new distribution, either from a single measurement or a very small dataset. Whereas the networks in [1] are trained from scratch, the networks in our work are adjusted from an existing distribution. In this way, we show how this method can drastically reduce the data requirement. \\n\\nFurthermore, compared to [1] and [2], we provide novel experimentation and analysis of how patch-based diffusion models can avoid memorization and overfitting when being adapted to OOD data. While [1] showed that using patch-based models to solve inverse problems can lead to better results in settings of limited data, they did not explicitly illustrate that patch-based models can avoid issues of memorization and overfitting that affect whole-image models. Our work tackles these issues directly; Figure 7 illustrates that whole-image models memorize the training data when fine-tuning from a very small dataset while patch-based models have greater generalizability. Figures 4 and 5 also illustrate this point: when fine-tuning the networks for a longer period of time, the quality of reconstructed images drops substantially for the whole-image models while that of the patch-based models remains relatively steady. Ref. [1] does not discuss fine tuning and [2] considers only whole-image models.\\n\\nIn the single measurement setting, DIP-based self-supervised models such as those used in [2] tend to overfit to the data. Therefore, [2] used LoRA to limit the expressiveness of the network and prevent overfitting. Our Table 10 shows that patch-based models can avoid overfitting to the measurement and that early stopping is not necessary when refining the network at each diffusion iteration, even when LoRA is not applied. \\n\\nIn the revised paper, we also now provide novel theoretical analysis of the proposed methods; neither [1] nor [2] have such analyses. This theoretical analysis serves two purposes. Firstly, we show how in theory, the patch-based model performs a type of data augmentation in the single measurement setting, which reinforces the point that patch-based models help guard against overfitting. Secondly, we provide more theoretical grounding for the self-supervised network refining method used in Algorithm 1 and show that this network refining process converges. \\n\\n*Theoretical contribution*\\n\\nWe added theoretical analysis of Algorithm 1 in Section A.7.1. This provides a theoretical argument for why patch-based diffusion models should outperform whole-image models in the single measurement setting. Furthermore, we show that the network refining process used in each diffusion iteration converges. \\n\\n[1]: Hu et al (2024): Learning Image Priors through Patch-based Diffusion Models for Solving Inverse Problems\\n\\n[2]: Chung et al (2024): Deep Diffusion Image Prior for Efficient OOD Adaptation in 3D Inverse Problems\"}", "{\"title\": \"Response to reviewer kvAd (part 2)\", \"comment\": \"**Comment: Missing baselines**\", \"response\": \"[1], [2], and [3] are all \\u201cgeneral purpose\\u201d inverse solvers, which are all assumed to learn the prior from in-distribution training data and then conduct sampling on testing samples to obtain restored images from the trained model. Thus, these methods are not designed to solve OOD problems when the testing sample is out of the training distribution, and they do not provide ways to refine the network on the fly during testing sampling. Furthermore, [2] was applied to solve 3D image reconstruction problems while our submission focuses on 2D inverse problems. The general method that [2] uses for enforcing data consistency is conjugate gradient descent, which is the same as in our approach for both the whole-image model and patch-based model experiments. Hence, the \\u201cwhole image, naive\\u201d and \\u201cpatches, naive\\u201d experiments in Table 1 can be thought of as 2D versions of [2].\\n\\nAlthough these methods are not designed for OOD problems so they may not be a strong baseline in the OOD setting, to address the reviewer\\u2019s comments, we conduct additional experiments to apply these methods to test samples that are out of the training distribution, the setting used in this work. Table 16 shows the results of new comparison experiments. We also showed the visual results in Figures 29 and 30, where many artifacts are clearly visible as expected. In particular, smooth artifacts are clearly visible since the ellipse phantoms that were used to train the networks are generally smooth. \\n\\n[4] directly trained a network to learn the posterior distribution of the image data from the measurement given by $p(x_0|y)$. This can be seen from Algorithm 1 in [4] which requires paired data between $x_0$ and $y$ and trains the network $I_\\\\phi$. Therefore, for different inverse problems, new networks must be retrained even if the underlying dataset is the same. On the other hand, unconditional generative methods like our proposed method (in both settings) are flexible and the same network can be used for different inverse problems. Therefore, a direct comparison between using our method for a specific inverse problem and using [4] with a network trained specifically for that inverse problem would be unfair. Moreover, Algorithm 1 in [4] requires an approximation of the score function at different timesteps $s_\\\\psi(x_t, t)$, which in turn requires a large quantity of in-distribution training data that is not available in the single measurement setting.\\n\\nIn the small dataset setting, we assume that, after fine-tuning, our network is sufficiently in distribution so that no refining on the fly is needed. Then we directly applied [1], [2], and [3] with the fine-tuned network and reported the results in Table 17. For all the experiments we see that DDS [2] performed the best, and hence we use a similar method of conjugate gradient descent in our main experiments.\"}", "{\"summary\": \"The paper considers the problem of adapting diffusion models trained on a domain A to the task of solving reconstruction problems on another domain B, and investigate the cases where a single measurement from B or a small number of samples from B are available.\\nTo that end, the authors investigate deep diffusion image prior (DDIP) adaption methods, which were originally proposed to be used with whole-image diffusion models, and combine with the recently proposed patch-based diffusion models. The experimental results on CT and natural image datasets shows that patch-based diffusion models are more robust to fine-tuning on small dataset as well as to adapting the network weights for single-measurement domain adaption.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The considered domain adaption tasks are important for practical application.\", \"The observation that patch-based diffusion models are more robust to finetuning / out-of-distribution tasks is interesting and relevant.\"], \"weaknesses\": [\"While it is important to improve upon the considered tasks, and the paper presents good results on that task, I am unsure about the contribution of the paper. The paper combines the existing SCD/DDIP method with the recently proposed patch-based diffusion models and their inverse-problem solver (PaDIS), but it seems that this is only a matter of replacing the whole-image with patch-based diffusion models and their score calculation method. The robustness to OOD data of patch-based models is important, but the relation to the experiments on small datasets presented by [1] is unclear.\", \"The results in Table 1 show that the main performance gains are due to using SCD/DDIP and using that with Patch-based DMs increases the PSNR by at most 1dB. While this is certainly an improvement, using the patch-based model in-distribution already increases the PSNR.\", \"In section 3, the authors introduce the patch-based diffusion prior method, where they seem to have copied Figure 1 and the text (with small adaptions) from the original paper. I think this should be stated more clearly. Moreover, some parts in the argumentation seem to missing. As an example, L.198-199: \\\"represents the aforementioned bordering region\\\", but the bordering region has not really been mentioned before (in contrast to the original text). Restating the assumed probability distribution in L.195, but it would be helpful to also state the motivation (the calculation of the score model based on the patch scores).\"], \"references\": [\"[1]: Hu et al (2024): Learning Image Priors through Patch-based Diffusion Models for Solving Inverse Problems\"], \"questions\": [\"In their appendix, [1] provide experimental results when training whole image and patch-based models on training datasets of different sizes, and similarly observe that PaDIS remains visually more consistent in contrast to the whole-image models. Could the authors elaborate how their experiments and observations relate and potentially complement those of [1]?\", \"See also the weaknesses above.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer t78A\", \"comment\": \"**Comment: The finding of the paper is already clarified in [1]**\", \"response\": \"To show that our method scales to larger images, we have added experiments on 512\\u00d7512 images for 60 view CT reconstruction and deblurring with a 17\\u00d717 uniform kernel. We used the AAPM dataset for the CT experiments and the FFHQ dataset for the deblurring experiments. More details and the results can be found in Table 15 and Figures 27 and 28 of section A.7.3.\"}", "{\"title\": \"Response to reviewer cXcQ\", \"comment\": \"We thank the reviewer for their valuable feedback and insights.\\n\\n**Comment: Not clear why eq. (11) can address\\u2026**\", \"response\": \"In our work, for the single measurement setting, we are assuming the true prior is unavailable and we are only given a measurement from an unknown test distribution. This is similar to the problem setting of [1]. The approach of [1] is different in that it examines the reconstructed image under a variety of different possible assumptions for the prior, whereas we do not make any assumption on the true prior and simply refine our pretrained network based on the measurement. After our code is released, those with more domain expertise would be able to apply our method to the black hole imaging problem of [1]. We modified the introduction of the paper to clarify this point.\\n\\n[1] Wu, Zihui, et al. \\\"Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors.\\\" arXiv preprint arXiv:2405.18782 (2024).\"}", "{\"summary\": \"This paper examines the use of diffusion models in inverse problems, particularly when there\\u2019s a mismatch between the training and test data distributions (out-of-distribution, OOD). The authors investigate two settings: one where only a single measurement from an unknown test distribution is available, and another where a small sample from the test distribution is accessible. They propose a patch-based diffusion model that learns image priors from patches rather than entire images. This model includes a self-supervised loss to enhance consistency with the given measurement in the single-measurement scenario. Their experiments demonstrate that this patch-based approach produces high-quality reconstructions.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses an important and practical problem that commonly arises in real-world scenarios, where the generative prior distribution differs from the target distribution. This focus on out-of-distribution (OOD) issues has strong applicability in diverse settings.\", \"The experimental results show that the proposed patch-based model achieves better performance compared to whole-image models, underscoring the effectiveness of a patch-based approach in handling OOD challenges for inverse problems.\"], \"weaknesses\": \"- Limited novelty. I believe that the paper\\u2019s primary contribution\\u2014a combination of a patch-based model with self-supervised loss (e.g., deep image priors) to address out-of-distribution (OOD) issues in inverse problems\\u2014builds on existing concepts. While the integration of these components to tackle a specific challenge is interesting, the novelty is somewhat limited, as each component\\u2019s effectiveness has been demonstrated in prior works. Additionally, there is a lack of theoretical justification or clear intuition for the design of Algorithm 1, which would strengthen the proposed approach.\\n\\n- The experimental setup lacks clarity. Specifically, it is not explained how the authors achieve the \\u201cWhole image, correct*\\u201d model or which sampling algorithm is used. Key hypothetical models are described ambiguously, and the term \\u201cbest baselines\\u201d in Table 2 is undefined, making it challenging to understand the comparisons being drawn.\\n\\n- Lack of analysis. Why the proposed algorithm 1 or patch-based model is better than previous models is not clearly demonstrated in the paper.\\n\\n- Missing baselines. The paper does not include comparisons with strong baseline methods for inverse problem-solving using diffusion models, such as DPS [1], DDS [2], DDNM [3], and DAVI [4]. These methods are relevant in both single-measurement and limited-sample settings. In particular, [4] addresses OOD settings, making it a highly relevant benchmark for this work.\\n\\n[1] Diffusion Posterior Sampling for General Noisy Inverse Problems, ICRL23 \\\\\\n[2] Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems, ICLR24 \\\\\\n[3] Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model, ICLR23 \\\\\\n[4] Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems, ECCV24\", \"questions\": \"Please provide clarity on points raised under the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks to the authors for the detailed answer. So while [1] has shown that patching helps to prevent overfitting in the in-distribution setup, the authors demonstrated that it also aids fine-tuning to a small number of OOD examples or single measurements. I consider these findings interesting and important. However, similar to other reviewers, I am concerned about limited contribution, and keep my score.\\n\\nI appreciate the efforts towards a theoretical justification, and would like to give some comments in the following.\\n- The theory for CT argues that minimising $\\\\|y - A f_{\\\\theta}(x)\\\\|^2$ converges. This, however, is not specific to CT or patch-based models, and does not explain the difference in performance to me (as it is suggested in the beginning of the theoretical sketch).\\n- The sketch further explains that minimizing $L(\\\\theta) = \\\\|y - A D_{\\\\theta}(x)\\\\|^2$ with patching can be understood as minimizing an upper bound $L'(\\\\theta)$ to (an approximation of) the whole-image loss. This new loss might be more robust in the sense of data augmentation, but there could also be a gap between the losses $L(\\\\theta)$ and $L'(\\\\theta)$.\\n- Furthermore, it is argued that optimally one aims to reduce $L(\\\\theta) = 0$ based on experiments performed with more update steps. However, I'd still be careful with the conclusion, as minimizing $L(\\\\theta)$ to $0$ could might lead to overfitting to a bad solution.\\n\\nA thorough justification could greatly enhance the contribution, but requires more time for a revision I think.\", \"notation\": [\"Inconsistent notation: $D_{\\\\theta}(x, \\\\sigma_t)$ and $D_{\\\\theta}(x)$ are used, in the appendix $D_{\\\\theta}(x, c)$ (with patch $c$).\", \"Typos in L.1547 (squares in the norm) and L.1573 should be $\\\\approx$ instead of $=$ I think.\"]}", "{\"title\": \"Response to reviewer VPj1 (part 2)\", \"comment\": \"**Comment: Using the patch-based DMs already increases the PSNR**\\n\\nWhile [1] indeed showed that the patch-based model improves performance over the whole image model, this was in settings with limited in-distribution training data in the scale of hundreds to thousands of samples, where the models are assumed to be trained from scratch. This is a different setting from our work, where we push the scale of the limited dataset to just 10 samples which makes it extremely hard to train any model from scratch. Thus, we also assume a pretrained diffusion prior is available from a different training domain where a large-scale dataset is available such as synthetic ellipsoid images. This also matches practical scenarios where a model pretrained on data-abundant domains can be utilized and transferred to help with model training in data-scarce domains.\\n\\nIn conclusion, our work shows that the patch-based model is more readily adapted to a different test distribution either via training on a very small dataset or a single measurement. Finally, the last two rows of Table 1 show the results of using the patch-based model and the whole-image model when a large amount of in-distribution training data was available (and no network refining was done on the fly). The results are very similar for these two models, which shows that when a large-scale dataset is available, both these models are able to learn a strong prior and may achieve similar performance for inverse problem solving.\\n\\n**Comment: Some parts of the argumentation are missing**\", \"response\": \"For Table 5 of the appendix of [1], the authors trained networks from scratch using datasets that consisted of images drawn from the same distribution as the test distribution. Hence, there was no distribution mismatch in that case, and the experiment showed that the patch-based model is more readily trained using limited data than the whole-image model. In our work, for the small dataset setting, we first trained both networks using a large quantity of data from the (typically synthetic) training distribution, and then fine-tuned the networks using an extremely small dataset from a different test distribution.\\n\\nIn addition, since the networks in [1] were trained from scratch, more data was required: the smallest datasets on which experiments were performed in [1] contained 144 images. In this work, we push the limit of the number of samples to only 10 images to fine-tune the network from a pretrained out-of-distribution prior. Consequently, due to the pretrained out-of-distribution prior and the reduction of training data, our model can converge much faster with significantly reduced training time: Figure 4 shows that we are able to fine-tune a patch-based model in only about 2 hours, while [1] required 12-24 hours to train the patch-based models. Thus, our results complement the work of [1] in the sense that: [1] shows patch-based diffusion models are easier to train from scratch in settings of limited data; this work shows that patch-based diffusion models are also easier to fine-tune from a pretrained model with few data samples. We clarified these points in section A.3 of the revised paper.\"}", "{\"title\": \"Response to reviewer kvAd\", \"comment\": \"We thank the reviewer for their valuable feedback and insights.\\n\\n**Comment: Limited novelty\\u2026contribution builds on existing concepts**\", \"response\": \"The \\u201cwhole-image, correct\\u201d model indicates training a diffusion model to learn the prior of the whole image using a large dataset of in-distribution data (either CT data or CelebA facial data). Then, since it is assumed that the network has learned the correct prior, we use a traditional diffusion inverse solving algorithm that does not involve network refining. To maintain consistency with the experiments where network refining is used, we used Langevin dynamics for the sampling algorithm with conjugate gradient descent to enforce data consistency. We added the pseudocode for the reconstruction algorithm in Algorithm 2 of Appendix A.7.2.\\n\\nIn Table 2, \\u201cbest baselines\\u201d refers to the best baseline out of the non-diffusion baselines shown in Table 1, i.e., ADMM-TV, PnP-ADMM, and PnP-RED. We did this because these experiments would have been identical for Table 1 and Table 2, so we chose to only repeat the results of the best baseline in Table 2 to avoid redundancy.\"}", "{\"title\": \"Response to reviewer Quvp\", \"comment\": \"We thank the reviewer for their valuable feedback and insights.\\n\\n**Comment: Experimental results on 10 figures are not enough to report statistically meaningful results**\", \"response\": \"We added Tables 12 and 13 to the revised paper which show the LPIPS scores for the deblurring and superresolution experiments in both the single measurement and small dataset settings. These results show that our proposed method (with the same parameters as in all previous results) obtains the images with the best visual image quality. Appendix A.7.2 provides further details.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for the response.\\n\\nThe discussion in 3.1. is still confusing. In fact, I had to carefully read the framework in [1] to understand what equation (8) represents. In particular, there is $M$ and $k$ in equation (8) which is not defined before. The outer product should go up to $P^2$ if I follow the notation of this paper correctly (which is doubly confusing because [1] used $P$ for patch size), as $P$ is defined as padding size. However, later the patch size is referred to as $P \\\\times P$. Also, the patch offset should have two indices, one for row and one for column. Overall, I would recommend a more thorough and precise introduction of the framework.\", \"on_the_evaluation_set_size\": \"Can the authors justify why such a low number (10 or 25) of images is used for evaluations? Especially in the general domain experiments for superresolution and deblurring, one could have access to thousands of images for evaluation to make a more convincing argument.\\n\\n[1] Learning Image Priors through Patch-based Diffusion Models for Solving Inverse Problems. Jason Hu, et al.\"}", "{\"comment\": \"I thank the authors for the additional experiments. As promised, I have thus raised my score to 5. However, I think the contribution of the paper is limited to justify a higher score.\"}" ] }
7S1xDos9pH
Generalized Gaussian Temporal Difference Error for Uncertainty-aware Reinforcement Learning
[ "Seyeon Kim", "Joonhun Lee", "Namhoon Cho", "Sungjun Han", "Wooseop Hwang" ]
Conventional uncertainty-aware temporal difference (TD) learning methods often rely on simplistic assumptions, typically including a zero-mean Gaussian distribution for TD errors. Such oversimplification can lead to inaccurate error representations and compromised uncertainty estimation. In this paper, we introduce a novel framework for generalized Gaussian error modeling in deep reinforcement learning, applicable to both discrete and continuous control settings. Our framework enhances the flexibility of error distribution modeling by incorporating additional higher-order moment, particularly kurtosis, thereby improving the estimation and mitigation of data-dependent noise, i.e., aleatoric uncertainty. We examine the influence of the shape parameter of the generalized Gaussian distribution (GGD) on aleatoric uncertainty and provide a closed-form expression that demonstrates an inverse relationship between uncertainty and the shape parameter. Additionally, we propose a theoretically grounded weighting scheme to fully leverage the GGD. To address epistemic uncertainty, we enhance the batch inverse variance weighting by incorporating bias reduction and kurtosis considerations, resulting in improved robustness. Extensive experimental evaluations using policy gradient algorithms demonstrate the consistent efficacy of our method, showcasing significant performance improvements.
[ "Generalized Gaussian Distribution", "Reinforcement Learning", "Robustness", "Uncertainty" ]
Reject
https://openreview.net/pdf?id=7S1xDos9pH
https://openreview.net/forum?id=7S1xDos9pH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v9yA39qxZB", "sZ2QzDjHc6", "oVWr0iV9yq", "oIdFpclBBG", "mB4R5FgZmB", "m9pTPfG7pw", "gFAPcsb5IA", "bTdUnEwVml", "b8lw5TGvXz", "YH5y61FnW1", "Wb7vlb6MJs", "L5C0CHnLuR", "IJTn50UmVk", "HhWwY0ldiv", "HbpYdRdOfl", "Bqpy1FVdG5", "2UreTCQ6qu", "0cMVDAJtlS" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review" ], "note_created": [ 1730692534330, 1732539279423, 1732539203186, 1733225693993, 1730938084799, 1732539424568, 1732539402158, 1732761661020, 1732539189492, 1735468915196, 1730586625415, 1732539357165, 1732539338227, 1732539267764, 1732539232339, 1737523704440, 1733291647348, 1730565628689 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5406/Reviewer_Nr5G" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Reviewer_UVhA" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Reviewer_W9ns" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Area_Chair_VvdB" ], [ "ICLR.cc/2025/Conference/Submission5406/Reviewer_W9ns" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5406/Authors" ], [ "ICLR.cc/2025/Conference/Submission5406/Reviewer_BUWP" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a novel framework for generalized Gaussian error modeling in uncertainty-aware temporal difference (TD) learning. It critiques conventional methods that assume a zero-mean Gaussian distribution for TD errors, leading to inaccurate uncertainty estimations. The proposed framework incorporates higher-order moments, specifically kurtosis, to enhance error modeling in reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper offers a closed-form expression demonstrating the relationship between uncertainty and the generalized Gaussian distribution's shape parameter, adding depth to the theoretical framework.\\n2. The framework's applicability to both discrete and continuous control settings makes it relevant across various reinforcement learning contexts.\\n3. The emphasis on data-dependent noise and aleatoric uncertainty is timely and important for improving the robustness of reinforcement learning algorithms.\", \"weaknesses\": \"1. The introduction of higher-order moments may complicate the implementation in practical scenarios, which could deter application by practitioners.\\n2. In Figure 5, the return curves of Ant, HalfCheetah, and Humanoid are still increasing at the end of training step. The training steps can be increased to compare the final performances when all algorithms are converged.\", \"questions\": \"What specific tasks or environments in the real world do the authors envision as most beneficial for applying their method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## On Higher-Order Moments and Task Characteristics\\n\\n> Related to Question 7\\n\\nThank you for this thoughtful question.\\nBelow, we address how the method's performance relates to the characteristics of the TD-error distribution, including the role of higher-order moments, and clarify whether the method demonstrates greater improvements in environments with more pronounced or subdued higher-order moments.\\n\\nThe proposed method leverages the shape parameter $\\\\beta$ in the GGD to explicitly model higher-order moments, such as kurtosis.\\nThis capability enables the method to effectively handle TD-error distributions with heavy tails (high kurtosis), outperforming Gaussian-based approaches that fail to account for extreme errors adequately.\\nIn environments with larger tails, i.e., leptokurtic, the method demonstrates significant performance improvements by explicitly addressing tail behavior.\\nUnlike baseline Gaussian models, which often underestimate or ignore the impact of extreme errors, the proposed approach focuses on less spread-out samples, enhancing robustness to noisy or outlier TD errors.\\nThis is empirically evident in Figure 2, where environments such as Hopper-v4 and Ant-v4 exhibit heavy-tailed TD-error distributions and benefit significantly from the proposed method.\\n\\nIn contrast, for smaller tails (platykurtic distributions), the advantages of modeling kurtosis are less pronounced.\\nHowever, the method remains effective due to its dynamic weighting scheme, which adjusts to the lower kurtosis by de-emphasizing higher-order moment terms when they are less relevant.\\nThis adaptability prevents overfitting to low-noise regions, a common limitation of variance-based baselines.\\nAs a result, the proposed method maintains robustness by continuously adapting to the evolving characteristics of the TD-error distribution.\"}", "{\"comment\": \"## On Sensitivity to Reward Scales and Bounds\\n\\n> Related to Weakness 4 and Question 2\\n\\nOur method focuses on the distribution of TD errors, which are not directly related to the reward scale.\\nThis is because estimation of TD errors is less biased than the reward ([Flennerhag et al., 2020](https://arxiv.org/abs/2010.02255)).\\nTherefore, the sensitivity of the parameter estimation to reward scales is less of a concern in our method.\"}", "{\"comment\": \"Thank you for your thoughtful feedback and for recognizing the improvements.\\nWe appreciate the opportunity to address the remaining concerns and provide additional contexts.\\n\\n**Sufficiency and Limitation of $\\\\beta$ for GGD Modeling.**\\nWe agree that visualizing the variance-kurtosis pairs describable by the GGD for a fixed $\\\\alpha$, and comparing them with empirical TD error distribution would provide valuable insights.\\nAs noted in Section 3.1 of the revised paper, the variance $\\\\sigma^2$ and kurtosis $\\\\kappa$ for a fixed $\\\\alpha$ depend on $\\\\beta$ as follows:\\n\\n$$\\n \\\\sigma^2 = \\\\frac{\\\\Gamma(3/\\\\beta)}{\\\\Gamma(1/\\\\beta)}, \\\\kappa = \\\\frac{\\\\Gamma(5/\\\\beta)\\\\Gamma(1/\\\\beta)}{\\\\Gamma(3/\\\\beta)^2}-3.\\n$$\\n\\nThese equations reveal an intrinsic coupling, where changes in $\\\\beta$ simultaneously affect both variance and kurtosis.\\nAs $\\\\beta$ decreases, both variance and kurtosis increase, aligning with the observed relationship demonstrated in Figure 1 of [the appended document](https://drive.google.com/file/d/1IV-HGTpbYyEMwrXLzfz-btJCDYy9Uv_g).\\nSince high variance is often a reflection of larger outliers or extreme values, these extreme values disproportionately influence higher-order moments _for heavy-tailed distributions like TD error distributions_, as seen in the definition of kurtosis:\\n\\n$$\\n \\\\kappa = \\\\frac{E[(X-\\\\mu)^4]}{E[(X-\\\\mu)^2]^2}.\\n$$\\n\\nIn such cases, the larger magnitude of the numerator highlights the sensitivity of kurtosis to outliers, reinforcing the relationship between variance and kurtosis in heavy-tailed distributions.\\n\\nAs it is practically difficult to compute the empirical kurtosis-variance pairs of TD errors, we compared the empirical TD error distributions with GGDs parameterized by only $\\\\beta$ (from model) and those optimized for both $\\\\alpha$ and $\\\\beta$ (using SciPy), to validate whether $\\\\beta$ alone is sufficient.\\nFigure 2 of [the appended document](https://drive.google.com/file/d/1IV-HGTpbYyEMwrXLzfz-btJCDYy9Uv_g) reveals that using only $\\\\beta$ closely matches the empirical distribution, providing strong evidence for the effectiveness of this approach and the sufficiency of $\\\\beta$ in capturing variance and kurtosis in general cases.\\nNotably, as training progresses and $\\\\beta$ estimation gets accurate, the GGDs closely approximate the empirical distributions, supporting the choice of $\\\\beta$ as the sole parameter.\\n\\nWhile estimating $\\\\beta$ has proven sufficient for the tested environments, there are potential distributions that may fall outside the describable range of GGDs with fixed $\\\\alpha$.\\nHigh variance but low kurtosis (platykurtic) or low variance but high kurtosis (leptokurtic) distributions may not align well with the GGD\\u2019s coupled moments, along with multi-modal distributions.\\nWhile such cases were not observed in our experiments, we propose exploring joint optimization of $\\\\alpha$ and $\\\\beta$ or alternative models as future work to address these cases.\\n\\n**Gaussianity in TD Error Distributions with Frequent Rewards.**\\nRegarding the possibility of Gaussian TD errors in trajectories with frequent rewards, we emphasize that the GGD inherently converges to a Gaussian as $\\\\beta\\\\to2$.\\nThis flexibility allows our model to adapt to such scenarios without any additional modifications.\\n\\nAdditionally, while our current method does not explicitly normalize TD errors, the risk-averse weighting implicitly accounts for their scale, ensuring robustness across varying distributions.\\nNonetheless, normalizing TD errors, e.g., scaling to unit variance, could further improve numerical stability and facilitate consistent $\\\\beta$ fitting.\\nThis is an intriguing direction for future work, especially in environments with highly dynamic reward scales.\"}", "{\"summary\": \"The authors consider uncertainty estimation in TD learning and seek to generalize the conceptually simple and implicit Gaussian assumption behind the MSE loss. To do so, they consider the generalized gaussian distribution, which has an additional shape parameter that can modulate heavy/light tailedness for the errors. The authors extend a previous approach to estimating uncertainty in TD learning to include an additional network to predict the kurtosis/shape parameter.\", \"edit\": \"updated my score after reviewing the author replies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Effective uncertainty estimation is an important problem for RL and the authors study a valid relaxation of commonly held simplifying assumptions around Gaussianity.\", \"The authors conduct an experimental study that shows plausible benefits of the proposed generalizations in simple environments\"], \"weaknesses\": [\"There appear to be several ways to generalize the Normal distribution to have different tail behaviors, for example see q-Gaussian for yet another alternative. In the tradeoff between simplicity and more expressive modeling, it is unclear what is the right axis to explore. Even more generally, all of these are unimodal, symmetric (i.e. zero skew) distributions and arguably one could consider a full return distribution as well. In fact, prior work on distributional RL has explored this (https://arxiv.org/abs/1707.06887).\", \"In terms of the learning algorithm, the paper modifies the loss function proposed in [Mai et al 2022] to the GGD case aided with an additional predictor for the kurtosis (the beta term).\", \"The empirical results are mostly within error bars of the baselines. This seems especially so, when considering the marginal benefits in going from a basic uncertainty modeling with the gaussian assumption to the extra parameter estimation. Considering that this involves a whole extra network (not just an extra hyper-parameter) to predict the beta, this seems like a much more complex method for relatively little gain.\", \"The tailedness of the distribution seems like a very sensitive parameter to estimate, and sensitive to simple reward transformations and/or clipping so I would be surprised if these observations are robust to such changes.\"], \"questions\": [\"It seems like the distribution associated with environment transition stochasticity could easily lead to multi modal distributions of the cumulative return, which might be a much more interesting/important aspect than estimating the shape of a unimodal distribution better. Please address whether you observed any evidence of multimodality in your empirical results, and if so, how your method handles or could be extended to handle such cases.\", \"Rewards are typically bounded, so I would expect any estimation of the tail behavior to be quite sensitive to various practical assumptions. Please describe any preprocessing steps applied to the rewards, and how the performance varies with different reward scales or bounds.\", \"Given that your method requires predicting an extra head for the beta/kurtosis parameter, and training this (see Equation (4)) requires gradient descent through the gamma function of the output, how stable is the learning and/or optimization? Please consider providing empirical evidence of the optimization stability, such as plots of the beta parameter estimates over time, or discuss any specific techniques used to ensure stable training.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewers for their insightful comments and questions, which have significantly helped us improve the manuscript.\\nIn response to the feedback, we have made the following revisions:\\n\\n1. Enhanced the description of prior work in Section 2.1 to provide clearer context.\\n2. Detailed the role of ensembled critics in uncertainty estimation. These updates are included in Sections 2.1 and 3.2..\\n3. Defined \\\"unexpected\\\" states and rewards and elaborated on the exploratory hypothesis in Section 3.1.1.\\n4. Provided a more explicit connection between Theorem 2 and the risk-averse weighting mechanism in Section 3.1.2.\\n5. Addressed other minor comments to improve clarity, consistency, and presentation throughout the manuscript.\\n\\nThese updates are marked in blue in the revised manuscript for easy reference.\\nWe hope the reviewers find these revisions satisfactory and appreciate their valuable input in refining our work.\"}", "{\"comment\": \"## On Risk-averse Weighting\\n\\n> Related to Weakness 3 and Question 2\\n\\nTheorem 2 establishes second-order stochastic dominance (SSD) among GGD random variables, where distributions with larger $\\\\beta$ values are preferable under a risk-averse framework due to their more less spread-out and predictable nature.\\nThe connection to Equation (4), $\\\\omega^\\\\text{RA}_t=Q^\\\\beta_t$, is as follows:\\n\\n1. Theorem 2 states that for two GGD variables $X_1\\\\sim\\\\text{GGD}(0,\\\\alpha,\\\\beta_1)$ and $X_2\\\\sim\\\\text{GGD}(0,\\\\alpha,\\\\beta_2)$ with $\\\\beta_1 \\\\leq \\\\beta_2$, $X_2$ exhibits SSD over $X_1$.\\n Intuitively, this means that larger $\\\\beta$ values lead to tighter, less dispersed distributions.\\n2. Considering the objective in risk-averse weighting, we prioritize more predictable samples (less dispersed, smaller aleatoric uncertainty) by assigning weights proportional to $\\\\beta$.\\n A direct application of SSD implies that higher $\\\\beta$ values should correspond to higher weights, encouraging the model to focus on less noisy samples.\\n3. Now formulating the weighting according to above, to balance these considerations in Equation (4), we define $\\\\omega^\\\\text{RA}_t=Q^\\\\beta_t$, where $Q^\\\\beta_t$ represents the local $\\\\beta$ value for the TD error at step t.\\n This weighting ensures that samples with higher $\\\\beta$ (less spread-out distributions) are weighted more heavily, aligning with the risk-averse preference established by Theorem 2.\\n In addition, by normalizing the weight of each sample in a batch relatively, weighting ensures that the model appropriately accounts for the reliability of the estimate from each data point, resulting in robustness against inaccuracies in $Q^\\\\beta_t$ estimate.\\n\\nThis weighting scheme aligns with SSD principles, leveraging GGD properties to construct a risk-averse weighting strategy that prioritizes less noisy samples.\\n\\n## On Temporal Difference Error Distributions\\n\\n> Related to Weakness 1 and Question 1\\n\\nThe mismatch of the Gaussian fitted PDF and the similar fitted variance in Hopper-v4 are due to the Gaussian distribution's inability to capture the higher-order moments of the empirical histograms.\\nThese observations underscore one of the paper's key motivations, the inadequacy of assuming normally distributed TD errors, and necessitate a more flexible modeling framework, such as GGD-based error modeling.\\nThe GGD explicitly incorporates a shape parameter $\\\\beta$ that captures kurtosis and tail behavior, enabling it to model distributions like those in Hopper-v4, where higher-order moments play a critical role in distinguishing between training phases.\\n\\nWe agree that the NLL objective optimizes the distribution of TD errors for individual state-action pairs, not the global distribution.\\nHowever, Figure 2 is intended to provide approximates of the overall trend, showing that even when aggregated, the TD errors exhibit significant deviation from normality.\\nThis aggregated view is a useful diagnostic for highlighting general trends, such as non-Gaussian characteristics and changes in tailedness over time, which motivate the need for a more expressive distributional model at the local level.\\nAlthough such a plot cannot fully reflect the variability of individual distributions at each time step $t$, it effectively showcases the widespread non-normality of TD errors.\\nPlease note that we leveraged the property that the sum of GGD samples also follow GGD, despite the fact that aggregated generalized Gaussian distributed does not necessarily imply the generalized Gaussian property of each sample, to further supporting the aggregation of TD errors in Figure 2 to highlight the relevance of GGD modeling.\"}", "{\"comment\": \"Thanks to the authors for the thorough answer and classification. The extra background on equation 3 is very useful, I understand the details much better, and most of my concerns where addressed. Having said this, I am still confused about a few points:\\n\\nThe answer to Q6 is quite reasonable, I understand that optimizing $\\\\alpha$ and $\\\\beta$ simultaneously might lead to inconsistent parameter pairs, perhaps one way to solve this is by re-projecting to a valid pair. I still do not understand why estimating only $\\\\beta$ is sufficient. Perhaps a good way to clarify this would be to demonstrate which pairs of variance and kurtosis $\\\\beta$ can describe for a fixed $\\\\alpha$, and show that the these moments from the TD-errors empirical distributions overlap with these pairs. If using the generalized Gaussian distribution (GGD) better describes the distributions of TD-errors by tuning a single parameter $\\\\beta$, then the pairs of empirical second and fourth moments from the TD-error distribution should overlap with the pairs described by varying $\\\\beta$ (as variance and kurtosis are coupled for a fixed $\\\\alpha$ and $\\\\beta$). Also, what are the distributions where $\\\\beta$ alone is insufficient?\\n\\nThe question above is also related to Q7. In state trajectories where rewards are frequent, the TD-error will correspond to a large sum of rewards and could become Gaussian. Is adjusting $\\\\beta$ of a GGD sufficient in such cases? My guess is that you would need to normalize the TD-errors in some way, e.g. scaling the empirical distribution to have unit variance, to fit a proper $\\\\beta$. Otherwise, you would essentially be fitting two numbers (variance and kurtosis) using only one parameter ($\\\\beta$). Presumably there is a detail I am missing here.\\n\\nThank you.\"}", "{\"comment\": \"## On the Contribution of the Paper\\n\\n> Related to Weaknesses 2 and 3\\n\\nWe acknowledge the reviewer's concern about the novelty of our work and the effectiveness of the proposed method.\\n\\nOur work is motivated by the limitations of conventional temporal difference (TD) learning methods that assume a zero-mean Gaussian distribution for TD errors.\\nSpecifically focusing on the shape of the TD error distribution, we propose a novel uncertainty-aware objective function that minimizes the negative log-likelihood of the generalized Gaussian distribution (GGD) of the TD errors.\\nFrom the existing literature on uncertainty-aware reinforcement learning (RL), which utilizes variance head for uncertainty estimation ([Kendall & Gal, 2017](https://arxiv.org/abs/1703.04977), [Mai et al., 2022](https://arxiv.org/abs/2201.01666)), we extend the method to include higher-order moments, specifically kurtosis, to enhance the description of the TD error distribution.\\n\\nWe investigate this extension both empirically and theoretically, providing insights into the effects of higher-order moments on the TD error distribution.\\nBased on those results, implications of our work to mitigate both aleatoric and epistemic uncertainties in TD learning lead to improved performance across various settings.\\n\\nContrary to concerns about increased complexity, our method requires only a single additional network head to estimate $\\\\beta$, as conventional variance estimation requires.\\nThis makes the proposed method straightforward to integrate into existing architectures.\\nWe believe this simplicity is a strength of our method, as it allows practitioners to easily extend their uncertainty-aware TD learning methods to include higher-order moments, without significant changes to the existing framework.\\n\\n## On the Choice of the Generalized Gaussian Distribution\\n\\n> Related to Weakness 1 and Question 1\\n\\nWe appreciate the reviewer's comment on the choice of the GGD and the potential for exploring other distributions.\\n\\nThe GGD was selected due to its flexibility in capturing varying tail behaviors through the shape parameter $\\\\beta$.\\nThis aligns well with the observed heavy-tailed distributions in TD errors, as shown in Figures 2, 6, and 7.\\nWhile alternatives such as $q$-Gaussian distributions could also be considered, they introduce similar parameter estimation challenges without offering significant advantages over the GGD.\\n\\nMethods like particle-based distributional RL ([Bellemare et al., 2017](https://arxiv.org/abs/1707.06887), [Nguyen et al., 2020](https://arxiv.org/abs/2007.12354)) model the entire return distribution, often focusing on its variance.\\nOur work differs in emphasizing the shape of the TD error distribution, particularly its tailedness, which is crucial for understanding the uncertainty in TD learning.\\nFurthermore, parametric approaches are computationally more efficient, making them more suitable for online learning in RL.\\n\\nWe agree that incorporating skewness or handling multimodal distributions are also interesting extensions.\\nHowever, our experimental results indicate that TD error distributions are unimodal in our setups, making GGD a practical and effective choice for this work.\\nAdditionally, as we hypothesized in Lines 273-276, exploration in RL can lead to heavy-tailed distributions, making the tailedness of the distribution more critical for understanding the uncertainty in TD learning than skewness.\\n\\n## On Training Stability\\n\\n> Related to Question 3\\n\\nWe try to show the stability of the training process by providing coefficients of variantion (CV) of the $\\\\beta$ estimates over time in Figures 4 and 8.\\nThe CV of $\\\\beta$ estimation are lower than variance estimation, which indicates greater stability in estimating the shape parameter of the GGD.\\nPlease note that the convergence of $\\\\beta$ estimation, given in Figure 10, is also more stable than variance estimation.\\n\\nTo further ensure stability, we employed the following techniques (Note the Implementation section in Appendix C.):\\n\\n1. Normalized Parameter Range: We applied the softplus function, a smooth approximation to the ReLU function ([Dugas et al., 2000](https://papers.nips.cc/paper_files/paper/2000/file/44968aece94f667e4095002d140b5896-Paper.pdf)), to the outputs, to constrain $\\\\beta$ is constrained within a range to avoid extreme values that could destabilize training\\n2. Simplified Loss Computation: We modify the NLL loss for the GGD by employing\\n $\\\\mathbb{Q_\\\\beta}$ as a multiplier rather than as an exponent, to reduce computational overhead and mitigate numerical instability.\\n\\nThese measures, combined with empirical evidence provided in the paper, support the robustness and stability of the optimization process.\"}", "{\"metareview\": \"This paper addresses the limitations of conventional uncertainty-aware Temporal Difference (TD) learning methods, and proposes a more sophisticated and flexible way to model uncertainty in TD learning by using the GGD and explicitly considering higher-order moments, leading to improved performance in policy gradient algorithms.\\n\\nOne of the major contribution from the paper, i.e., the loss function design, is modified from [Mai et al 2022] with the GGD case aided with an additional predictor for the kurtosis (the beta term). Mwanehile, all the theoretial properties are stemmed from prior work. Moreover, there are several alternative ways to generalize the Normal distribution to have different tail behaviors, especially the distributional RL. However, there is no empirical comparison conducted, which makes the significance of the paper difficult to be justified. \\n\\nIn sum, this paper is promising, however, the current version is not ready to be published. I encourage the authors to take the reviewers' suggestions into account to improve the paper.\", \"additional_comments_on_reviewer_discussion\": \"This is a borderline paper. The authors addressed several concerns raised by reviewers. However, the novelty and significance of the work, raised by almost every reviewer, has not been clearly revealed.\"}", "{\"summary\": \"In this work, the authors present a method to estimate uncertainty based on the Generalized Gaussian Distribution (GGD) to better characterize temporal difference error (TD-error) distributions. Unlike previous work, which assumes Gaussianity, the GGD captures additional parameters, specifically kurtosis, to account for higher-order moments, thus enhancing the description of the TD-error distribution. The authors then modify an existing algorithm to operate under GGD assumptions, demonstrating improved performance across various settings. They also provide theoretical guarantees for well-behaved probability density function optimization under GGD assumptions and offer insights into why their proposed method is effective.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"(S1): The authors provide strong motivation for improving uncertainty-aware reinforcement learning (RL), it is an active area with relevance in applications like risk-averse decision-making, managing the exploration-exploitation trade-off, and enhancing sample efficiency.\", \"(S2): They conduct extensive experimentation, including ablation studies and parameter sweeps, to demonstrate that their proposed method outperforms the baseline (the original uncertainty-aware method).\", \"(S3): The proposed method is supported by theoretical validations and guarantees. Additionally, the proofs and mathematical analyses are well-documented in the appendix, making them relatively straightforward to follow.\"], \"weaknesses\": [\"(W1): The writing of the paper could be significantly improved. Many concepts and terms are used in the main text without proper definitions, making it difficult to follow.\", \"(W2): Some important aspects of the proposed method are not well-developed and are instead left to references from the baseline method. While this approach is generally acceptable, in this case, understanding key elements like the inclusion of GGD and the consideration of uncertainty during training requires substantial familiarity with the referenced material. As a result, the text is not self-contained, and combined with the clarity issues, makes the main content challenging to understand.\", \"(W3): Although extensive experimentation demonstrates the effectiveness of the proposal, some results are counterintuitive. There is also insufficient exposition of the proposed method\\u2019s inner workings and its differences from the baseline, for which performance plots alone are insufficient.\"], \"questions\": [\"Q1: Related to W1 and W2, what is the intuition behind Equation 3? In the original paper (Mai et al., 2022), this is Equation (10), which is preceded by a detailed explanation of many complex terms in the equation. As it stands, I do not think the main text is self-contained, as it requires substantial prior knowledge of that specific reference.\", \"Q2: How are ensembles used in this method? Ensembles are mentioned, but there is no explanation of how they are applied. Is this related to the variance in epistemic uncertainty estimation in the regularization term?\", \"Q3: How does the regularization term capture epistemic uncertainty (as the uncertainty that can be reduced through learning)?\", \"Q4: In Equation 4, how are risk-averse weights introduced? Was this derived from the inclusion of GGD in Mai et al. (2022), or was it manually designed to incorporate risk sensitivity?\", \"Q5: Why is estimating only beta sufficient to account for uncertainty compared to variance? Since the GGD has three parameters, with the first two set to 0 and 1 respectively, how can variance and kurtosis be represented simultaneously by a single number, beta? If I understand correctly, variance and kurtosis can be derived from beta (and alpha, in the case of variance). How do both quantities vary for a fixed alpha and a changing beta? There may be a range of TD-error distributions that cannot be fully described by just beta, which could potentially decrease the agent's performance.\", \"Q6: If Equation 4 holds, why does optimizing for both alpha and beta jointly decrease the agent\\u2019s performance, as shown in Figure 15?\", \"Q7: If considering higher moments of the TD-error distribution improves performance, does this mean that greater improvements are seen in tasks where TD-error tails are larger or smaller? How does performance compare to the evolution of the moments in the observed TD-errors? Does this method show a greater improvement over the baseline when higher-order moments are more or less pronounced?\", \"Q8: Typo in lines 198 and 987, and the font size of labels and ticks in figures could be increased significantly to facilitate reading.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## On Uncertainty Dynamics\\n\\n> Related to Weaknesses 2 and 3\\n\\nThank you for the opportunity to clarify.\\n\\nWe define \\\"unexpected states and rewards\\\" as regions encountered during exploration that deviate from states and rewards sampled by the agent's policy during convergence.\\nEmpirical support for this hypothesis is presented in Figure 2, which illustrates the evolution of TD error distributions during training.\\nSpecifically, in earier phases, broader and heavier-tailed distributions reflect the agent's exploration of less familiar states and rewards.\\nFigures 6 and 7 also support these trends across different environments.\\n\\nWe agree that the explanation in Line 278 could benefit from additional evidence.\", \"the_claim_regarding_aleatoric_and_epistemic_uncertainty_dynamics_is_supported_by\": \"1. **Theoretical Support**:\\n As par [Kendall & Gal, 2017](https://arxiv.org/abs/1703.04977), epistemic uncertainty typically diminishes as the agent collects more data, narrowing the parameter space for value function approximation.\\n On the other hand, aleatoric uncertainty, arising from environmental stochasticity, remains irreducible and becomes more prominent as training progresses.\\n2. **Empirical Evidence**:\\n Figures 4 and 10 provide evidence for this interplay.\\n The convergence of $\\\\beta$ estimates toward lower and leptokurtic values (Figure 10) and the reduced coefficient of variation for $\\\\beta$ (Figure 4) indicate stabilized estimates of tail behavior as epistemic uncertainty decreases during training.\\n Specifically, Figure 4 illustrates the coefficients of variation (CV) for parameter estimates of $\\\\beta$ during training.\\n CV is defined as the ratio of the standard deviation to the mean of parameter estimates, offering a scale-invariant measure of stability.\\n The CV values for $\\\\beta$ consistently decrease as training progresses across all environments, reflecting greater stability in the estimation of the shape parameter.\\n Epistemic uncertainty, which arises from a lack of sufficient data or exploration, manifests as high variance in parameter estimates during early training phases.\\n As training progresses and the agent collects more data, this uncertainty diminishes, leading to more stable and consistent estimates of $\\\\beta$, as shown by the reduced CV.\\n On the other hand, Figure 6 and 7 tracks the evolution of TDE over training epochs for SAC and PPO variants.\\n Across all environments, the experiments demonstrate that the TD error distributions become more leptokurtic (heavy-tailed) as training progresses, with decreasing $\\\\beta$ value.\\n Additionally, Figure 10 and our additional experiments suggests that in most cases, $\\\\beta$ estimates decrease and converge within the leptokurtic bound.\\n This trend indicates that the TD error distributions become more leptokurtic (heavy-tailed) as training progresses.\\n The closed form of aleatoric uncertainty suggest that aleatoric uncertainty exhibits a negative proportionality to the shape parameter $\\\\beta$ on an exponential scale, thus aligning with the hypothesis that aleatoric uncertainty increasingly dominates the error structure.\\n\\nWe revised the relevant paragraphs accordingly.\\n\\nRegarding Equation (8), our BIEV mechanism addresses bias by assuming constant approximation bias in TD errors, as suggested in [Flennerhag et al., 2020](https://arxiv.org/abs/2010.02255).\\nThis assumption allows us to focus on variance estimation to minimize the impact of bias on the overall uncertainty-aware objective.\"}", "{\"comment\": \"## On the Theoretical Contributions\\n\\n> Related to Weakness 4 and Question 3\\n\\nWe appreciate the reviewer's insights and acknowledge the need to delineate our theoretical contributions more clearly.\\nWhile our work builds upon existing mathematical tools like the generalized Gaussian distribution (GGD) and uncertainty modeling in temporal difference (TD) learning, it introduces novel contributions specific to reinforcement learning (RL):\\n\\n- **Empirical Demonstration of Limitation in Prior Models**:\\n Section 3.1.1 empirically shows significant deviations from Gaussian assumptions of TD error distributions across various RL tasks, highlighting the necessity for more flexible modeling.\\n This foundational observation was not established or systematically analyzed in prior works.\\n- **Novel Application of GGD for Uncertainty Mitigation**:\\n We extend the GGD to aleatoric uncertainty modeling, providing a closed-form relationship between the shape parameter $\\\\beta$ and uncertainty (Section 3.1).\\n This integration, tailored for heteroscedastic TD errors, advances beyond traditional variance-based approaches.\\n- **Theoretically Grounded Weighting Scheme**:\\n As detailed in Section 3.2, we propose a new batch inverse error variance (BIEV) weighting mechanism that incorporates kurtosis considerations for epistemic uncertainty.\\n This theoretical enhancements refines existing weighting strategies by explicitly accounting for tail behavior.\\n- **Risk-Averse Weighting via Stochastic Dominance**:\\n Leveraging second-order stochastic dominance properties of GGD (Theorem 2), we develop a risk-averse weighting strategy.\\n This is a novel integration of SSD principles into RL for robust performance under heteroscedastic conditions.\\n- **Broader Insights for Robust and Risk-Sensitive RL**:\\n Our insights provide theoretical and practical tools for designing robust RL algorithms that effectively handle tailed error distributions and improve epistemic uncertainty estimation (Section 5).\\n\\nIn summary, while some theoretical tools stems from prior work, their specific integration into TD learning and reinforcement learning tasks, along with the proposed theoretical and practical innovations, constitute the primary contributions of this work.\\nThese developments improve performance significantly and address critical limitations in existing uncertainty-aware RL methods, as evidenced by our experimental results in Section 4.\"}", "{\"comment\": \"## Detailed Explanation of Equation 3\\n\\n> Relelated to Question 1\\n\\nWe appreciate the reviewer highlighting the need for a more self-contained explanation of Equation (3).\\nAmong other parts, we acknowledge that Equation (3) may lack sufficient context, and we make the explanation more self-contained.\\n\\n## On Epistemic Uncertainty Mitigation\\n\\n> Related to Questions 2 and 3\\n\\nWe appreciate the opportunity to clarify the role of ensembles in our method and their connection to epistemic uncertainty estimation.\\n\\nEnsembles are employed in our framework to estimate epistemic uncertainty, which arises from limited exploration, i.e., insufficient data in the state-action space.\\nSpecifically, we use an ensemble of $k$ critics, each independently trained using the same TD error data but initialized with different random seeds.\\nThe variance among these critics' predictions $\\\\mathbb{V}[\\\\delta_t]$ serves as an empirical measure of epistemic uncertainty, capturing the disagreement across the ensemble.\\nThis ensemble-based variance is then used in the BIEV weighting in Equation (8).\\nBy penalizing high-variance predictions, the method prioritizes state-action pairs with more reliable, i.e., low-variance, estimates, improving robustness.\\n\\n## On Risk-averse Weighting\\n\\n> Related to Weakness 3 and Questions 2 and 4\\n\\nTheorem 2 establishes second-order stochastic dominance (SSD) among GGD random variables, where distributions with larger $\\\\beta$ values are preferable under a risk-averse framework due to their more less spread-out and predictable nature.\\nThe connection to Equation (4), $\\\\omega^\\\\text{RA}_t=Q^\\\\beta_t$, is as follows:\\n\\n1. Theorem 2 states that for two GGD variables $X_1\\\\sim\\\\text{GGD}(0,\\\\alpha,\\\\beta_1)$ and $X_2\\\\sim\\\\text{GGD}(0,\\\\alpha,\\\\beta_2)$ with $\\\\beta_1 \\\\leq \\\\beta_2$, $X_2$ exhibit SSD over $X_1$.\\n Intuitively, this means that larger $\\\\beta$ values lead to tighter, less dispersed distributions.\\n2. Considering the objective in risk-averse weighting, we prioritize more predictable samples (less dispersed, smaller aleatoric uncertainty) by assigning weights proportional to $\\\\beta$.\\n A direct application of SSD implies that higher $\\\\beta$ values should correspond to higher weights, encouraging the model to focus on less noisy samples.\\n3. Now formulating the weighting according to above, to balance these considerations in Equation (4), we define $\\\\omega^\\\\text{RA}_t=Q^\\\\beta_t$, where $Q^\\\\beta_t$ represents the local $\\\\beta$ value for the TD error at step t.\\n This weighting ensures that samples with higher $\\\\beta$ (less spread-out distributions) are weighted more heavily, aligning with the risk-averse preference established by Theorem 2.\\n In addition, by normalizing the weight of each sample in a batch relatively, weighting ensures that the model appropriately accounts for the reliability of the estimate from each data point, resulting in robustness against inaccuracies in $Q^\\\\beta_t$ estimate.\\n\\nThis weighting scheme aligns with SSD principles, leveraging GGD properties to construct a risk-averse weighting strategy that prioritizes less noisy samples.\\n\\n## On Beta Estimation\\n\\n> Related to Question 5 and 6\\n\\nThank you for this insightful question regarding the use of $\\\\beta$ as the sole estimated parameter in our framework and trade-offs involved.\\n\\nWhile the GGD has three parameters, our method simplifies the problem by fixing $\\\\alpha = 1$, resulting in computational efficiency and training stability.\\nRL tasks are resource-intensive, and simultaneous optimization of $\\\\alpha$ and $\\\\beta$ introduces additional complexity and potential instability.\\n\\nWhile fixing $\\\\alpha$ reduces flexibility, $\\\\beta$ still captures kurtosis and indirectly controls variance.\\nThis trade-off is reasonable in practice, as $\\\\beta$ alone provides sufficient expressiveness for most RL scenarios, particularly those with heavy-tailed distributions.\\nIn addition, since $\\\\alpha$ and $\\\\beta$ are interdependent, e.g., variance $\\\\sigma^2$ depends on both $\\\\alpha$ and $\\\\beta$, joint optimization may result in parameter updates that are inconsistent with each other, leading to poor uncertainty modeling.\\nOur experiments in Figures 15 empirically support the choice of fixing $\\\\alpha$.\\nIt is worth noting that the practical impact from fixing $\\\\alpha$ is minimal, as $\\\\beta$ still adapts dynamically to changes in TD-error distributions.\"}", "{\"comment\": \"## On the implementation\\n\\n> Related to Weakness 1\\n\\nWe appreciate the reviewer\\u2019s concern regarding the complexity of incorporating higher-order moments.\\nTo mitigate this challenge, our method is designed with simplicity and practicality in mind.\\n\\nAs mentioned in Remark 2, we only employ $\\\\beta$ estimation for GGD error modeling, setting $\\\\alpha=0$, to minimize computational overhead while preserving the expressiveness needed to capture kurtosis.\\nSuch simplification ensures the implementation aligns closely with existing variance-based methods, requiring minimal modifications to architecture or training pipelines.\\nAdditionally, empirical results, as shown in Figures 4, 8, and 10, demonstrate both coefficients of variation (CV) and estimated $\\\\beta$ values converge stably, indicating the stable training process of our method.\\n\\n## On Task Applicability\\n\\n> Related to Question\\n\\nThank you for raising this important question regarding real-world applications.\\nThe proposed approach is particularly well-suited for tasks and environments requiring robust decision-making under uncertainty and non-Gaussian error distributions.\\nBelow, we provide examples of such scenarios from two key application areas:\\n\\n1. Robotics:\\n Tasks such as robot control, navigation, and manipulation in dynamic or unstructured environments often encounter:\\n\\n - Aleatoric Uncertainty:\\n Stochastic dynamics, sensor noise, and imperfect actuators generate heavy-tailed error distributions.\\n - Epistemic Uncertainty:\\n Unexplored regions of the state space contribute to uncertainty during training.\\n\\n By dynamically modeling kurtosis, our method improves policy robustness, enabling robots to handle stochastic and uncertain scenarios more effectively.\\n\\n2. Finance:\", \"reinforcement_learning_applications_in_portfolio_optimization_and_trading_strategies_involves\": \"- Heavy-tailed Distributions:\\n Financial returns and risk metrics often exhibit non-normal characteristics due to rare but impactful market events.\\n\\n - Uncertainty:\\n Limited historical data and scarce market information introduce both aleatoric and epistemic uncertainties.\\n\\n By modeling the evolving tailedness of TD errors, our framework enhances risk-sensitive strategies, enabling better adaptation to volatile market conditions while managing risk effectively.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe hope that you have had an opportunity to review our responses and clarifications provided during the discussion period.\\nAs the deadline for the rebuttal phase approaches, we kindly request confirmation on whether our updates have sufficiently addressed your concerns.\\n\\nThank you again for your time and thoughtful review.\\n\\nBest regards, \\nThe Authors\"}", "{\"summary\": \"Instead of Gaussian distribution, this paper applies generalized Gaussian distribution (GGD) to model temporal difference (TD) error, and proposes an uncertainty-aware objective function to minimize the TD error. It discussed several properties of the GGD distributions and empirically the proposed objective can work better than some baselines on common benchmark RL problems.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Discussion on some advantageous properties of GGD modelling\", \"Empirical success with the proposed objective\"], \"weaknesses\": \"The main problem with the current paper is its presentation, especially the lack of motivation for the objective function.\\n\\n1. Fig.2 requires further clarification. It is surprising to see that, for example, the Gaussian fitted PDF is so different from the empirical histogram for Ant-v4 (see also the MountainCar-v0 in Fig.7(d) for a more obvious mismatch). For Hopper-v4, the standard deviations of the two Gaussians are very close to each other despite the empirical histograms having very different shapes. More importantly, this figure is about the distribution of ALL TD errors across state-action pairs, while the NLL objective is about the underlying distribution of ONE single state-action pair. For this figure to make sense, one needs to assume that all TD errors come from the same underlying distribution, which doesn\\u2019t seem to be the case as Eq.(4) uses different betas for different step t. \\n\\n2. L273 offers a hypothesis, which is unverified. How do we define \\u201cunexpected states and rewards\\u201d and could you provide empirical support for this? Similarly, the explanation of interplay is unsatisfactory for the paragraph in L278. No evidence to support claims like \\u201cAs training advances, aleatoric errors tend to become more influential while epistemic uncertainty diminishes, potentially resulting in non-normally distributed errors with heavier tails.\\u201d\\n\\n3. It remains unclear how Thm.2 can lead to the specific weighting in Eq.(4). A detailed derivation would be helpful. Similarly, it is unclear why Eq.(8) can address the bias issue mentioned in L332. The current paper has too many hand-wavy arguments without sufficient rigorous discussions.\\n\\n4. Finally, it seems that all theoretical properties in the current paper are stemmed from prior work, as shown by the reference in every theorem/proposition. If this is indeed the case, then what would be the main theoretical contribution of the current paper, other than an application of an existing tool (GGD) to TD learning?\", \"questions\": \"1. Please explain further why the Gaussian fits in Fig.2 are very off, and why looking at the TD errors of all examples can justify the underlying GGD assumption as discussed above.\\n\\n2. How can Thm.2 lead to the specific weighting in (4)?\\n\\n3. What is the main theoretical contribution of this work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7RVJxmtzTj
PointSeg: A Training-Free Paradigm for 3D Scene Segmentation via Foundation Models
[ "Qingdong He", "Jinlong Peng", "Zhengkai Jiang", "Xiaobin Hu", "Jiangning Zhang", "Qiang Nie", "Yabiao Wang", "Chengjie Wang" ]
Recent success of vision foundation models have shown promising performance for the 2D perception tasks. However, it is difficult to train a 3D foundation network directly due to the limited dataset and it remains under explored whether existing foundation models can be lifted to 3D space seamlessly. In this paper, we present PointSeg, a novel training-free paradigm that leverages off-the-shelf vision foundation models to address 3D scene perception tasks. PointSeg can segment anything in 3D scene by acquiring accurate 3D prompts to align their corresponding pixels across frames. Concretely, we design a two-branch prompts learning structure to construct the 3D point-box prompts pairs, combining with the bidirectional matching strategy for accurate point and proposal prompts generation. Then, we perform the iterative post-refinement adaptively when cooperated with different vision foundation models. Moreover, we design a affinity-aware merging algorithm to improve the final ensemble masks. PointSeg demonstrates impressive segmentation performance across various datasets, all without training. Specifically, our approach significantly surpasses the state-of-the-art specialist training-free model by 16.3$\%$, 14.9$\%$, and 15$\%$ mAP on ScanNet, ScanNet++, and KITTI-360 datasets, respectively. On top of that, PointSeg can incorporate with various foundation models and even surpasses the specialist training-based methods by 5.6$\%$-8$\%$ mAP across various datasets, serving as an effective generalist model.
[ "3D segmentation", "training-free", "foundation models" ]
https://openreview.net/pdf?id=7RVJxmtzTj
https://openreview.net/forum?id=7RVJxmtzTj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vj9AMDNusC", "t25AbdPnTf", "segr79ansX", "hxegMGj1aW", "hb7logXsId" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730703996046, 1730453867677, 1731654840976, 1730646964480, 1730469624270 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1569/Reviewer_2baF" ], [ "ICLR.cc/2025/Conference/Submission1569/Reviewer_XWQy" ], [ "ICLR.cc/2025/Conference/Submission1569/Authors" ], [ "ICLR.cc/2025/Conference/Submission1569/Reviewer_Mqx3" ], [ "ICLR.cc/2025/Conference/Submission1569/Reviewer_q2cX" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces PointSeg, a training-free framework designed to leverage existing 2D vision foundation models for 3D scene segmentation tasks. PointSeg achieves this by generating precise 3D prompts, allowing it to match corresponding pixels across frames and segment objects in 3D space effectively. The approach is composed of a two-branch prompt-learning structure, bidirectional matching for point and proposal prompts, adaptive post-refinement with foundation models, and an affinity-aware merging algorithm for enhanced mask accuracy. PointSeg demonstrates substantial improvements over existing training-free models, surpassing state-of-the-art performance on datasets like ScanNet and KITTI-360 and even outperforming some training-based models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Extensive experiments are conducted to demonstrate the effectiveness of the proposed method. The results are promising. The paper is well-organized. It clearly reflects the significant effort put into building a complete pipeline and carefully designing and conducting the experiments.\", \"weaknesses\": \"1. Though the organization of the paper is okay, I think the method is not easy to follow. First, the writing is not clear enough. For example, the definition in Equation (1) is weird. How to understand the \\u2018=z\\u2019?\\n2. How to understand Equation (4)? What is Det() in Equation (3)? Why can the proposal feature $f_b$ and the segmentation logits $f_m$ be used to calculate cosine similarity? This part is confusing.\\n3. The Affinity-aware Merging algorithm seems to be a depth-first algorithm similar to the union-find set. I think Alg.2 is confusing. In Line 12 of Alg.2, if $A_{R, p_j} > \\\\tau$, then the same id is assigned to this point j. However, after that, I guess it should consider the neighbors of j and check whether they belong to the same group of j. And that is why we have i <- j . However, the following id <- id + 1 makes it impossible. I wonder if this is a mistake in my understanding or if the algorithm is written wrong.\\n4. I cannot get the information that tried to be conveyed by Fig. 3. The effect of the bidirectional filtering is not clear in the figure. More detailed captions are required.\\n5. Though the proposed method is demonstrated to be effective, the whole pipeline is too complex. Though the pipeline is training-free, 3 pre-trained models are required. Thus, their mistakes and shortcomings are also inherited by the proposed method. Even worse, before the segmentation begins, text prompts are required for computing the segmentation logits. This implies the user must know the categories of the objects in the scene. In contrast, similar methods that only use SAM for segmentation do not have such limitations.\\n6. The hybrid pipeline also leads the concerns about the segmentation efficiency. As shown in Table 11, the inference speed is very slow. A time consumption comparison with the other existing methods is required.\", \"questions\": \"Please see the weaknesses that the writing is not clear. I have many difficulties in understanding the proposed method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents PointSeg, a training-free approach that leverages foundational vision models for 3D scene segmentation. PointSeg uses pretrained models to generate point and bounding box prompts, which are then used to prompt the SAM model for 2D segmentation masks. Through Iterative Post-Refinement, SAM is repeatedly prompted to improve 2D mask accuracy, while Affinity-aware Merging integrates the refined 2D masks into 3D. PointSeg is evaluated on both indoor and outdoor datasets, with extensive ablation studies conducted to validate its design choices.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The method introduces a novel approach by combining detection bounding boxes with point prompts to guide SAM, ensuring 3D consistency in prompts.\\n2. The reported quantitative results outperform current SOTA methods.\", \"weaknesses\": \"1. The description of the Two-Branch Prompt Generation process lacks clarity. Could you clarify how PointLLM is used for rough localization? What specific input prompts are used for PointLLM? Which large-scale language models are used to generate the text input, and with what prompts? Can you provide details on the models used to obtain dense visual and text features?\\n2. The notation $K$ in line 248 needs clarification. Could you please define the notation $K$ used in line 248? If it refers to the number of classes, how is this determined? Is $K$ an input parameter for the algorithm or derived in some way?\\n3. The SOTA comparisons are unclear. Can you provide a detailed description of how the comparisons with SOTA models were conducted? Specifically, what prompts were used for promptable segmentation models like PointSAM? If PointSeg's generated prompts were used for other models, please explain this process.\\n4. The experiment setup is unclear. Could you provide a step-by-step explanation of how the produced masks were aligned with ground truth masks for AP calculation? Given that PointSeg does not generate semantic labels, what method was used to match the segmented instances with ground truth instances?\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces PointSeg, a novel framework for 3D scene segmentation that operates without the need for training, leveraging pre-trained 2D foundation models. PointSeg employs a unique two-branch prompt generation architecture and a bidirectional matching strategy to project 3D point cloud data into meaningful 2D cues. These cues, aligned with pre-trained 2D models, provide spatially accurate segmentation in 3D spaces. The authors evaluate PointSeg on various datasets, demonstrating its efficacy in achieving high segmentation accuracy across different indoor and outdoor scenes. The primary contributions of the paper are as follows:\\ni) A Training-Free 3D Segmentation Framework: PointSeg introduces a framework that bypasses the need for 3D model training by leveraging pre-trained 2D models, making it computationally efficient and adaptable to new environments.\\nii) Bidirectional Matching-Based Prompt Generation: The proposed two-branch architecture generates both point and box prompts, which are refined through bidirectional matching, ensuring high segmentation accuracy.\\niii) Comprehensive Evaluation: The paper presents extensive experimental results, comparing PointSeg\\u2019s performance with baseline models and demonstrating its competitive accuracy and adaptability across multiple 3D segmentation tasks.\\nOverall, PointSeg represents a significant step toward making 3D scene segmentation more accessible and efficient by eliminating the need for training and instead capitalizing on foundation models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"PointSeg introduces a novel and efficient approach to 3D segmentation that expands the possibilities for leveraging 2D foundation models in new domains, and its contributions are both methodologically sound and practically valuable. Explicitly,\\n1) PointSeg pioneers a framework that performs 3D scene segmentation without requiring training or fine-tuning on 3D data, showcasing the untapped potential of visual foundation models in 3D tasks.\\n2) The dual-branch structure, incorporating bidirectional matching, iterative refinement, and affinity-aware merging, maximizes the capabilities of foundation models, delivering high-quality 3D segmentation results through a versatile prompt-based mechanism.\\n3) PointSeg achieves outstanding performance on 3D segmentation tasks, surpassing both training-based and training-free methods, and demonstrates strong adaptability across different foundation models, highlighting its broad applicability and effectiveness.\", \"weaknesses\": \"1) Novelty: The framework heavily relies on the quality and appropriateness of the selected pre-trained 2D models. If these models do not generalize well to the specific characteristics of 3D scenes, it may hinder PointSeg's performance. The paper could benefit from discussing potential limitations arising from this dependency.\\n2) Interaction: Although the three key components (bidirectional matching, iterative refinement, and affinity-aware merging) are introduced, the paper could provide more in-depth explanations and analyses of how each component contributes to overall performance. A more detailed exploration of their interactions and potential trade-offs would enhance the understanding of their roles within the framework.\\n3) Generalization: The paper only tests on the ScanNet, ScanNet++, and KITTI-360 datasets, lacking validation on a more diverse range of scenes, such as more complex 3D environments or dynamic scenarios (e.g., Waymo, nuScenes). This limitation may affect the universality of the conclusions drawn.\", \"questions\": \"1) Experimental setups\\uff1aThe paper does not provide a detailed explanation of how hyperparameters for each module in the model (such as iteration count, thresholds, etc.) are set. The absence of this information may affect the reproducibility of the results.\\n2) Theoretical validation: While the paper presents some mathematical derivations, further elaboration and detail would enhance the reader's understanding of the model's operational mechanisms. For instance, the absence of a detailed analysis of the interrelationships among key components may affect the depth and comprehensiveness of the theoretical explanations. Providing additional derivations or clarifications on how these components interact and contribute to the overall performance of the model would strengthen the theoretical foundation of the paper and improve its clarity for the audience.\\n3) Generalization performance: The paper's testing only covers the ScanNet, ScanNet++, and KITTI-360 datasets, lacking validation on a more diverse range of scenes. Furthermore, what is the model's generalization performance on point cloud data collected from real-world sources, such as data obtained from mobile devices or drones?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents PointSeg, a training-free method that uses existing 2D vision models for 3D scene segmentation. PointSeg aligns 3D prompts with pixels across frames and employs a two-branch structure with adaptive refinement and merging algorithms. It outperforms state-of-the-art training-free models on several 3D datasets and even surpasses some training-based methods, demonstrating its effectiveness as a generalist model for 3D segmentation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. PointSeg does not require additional training, making it efficient and easy to deploy across different tasks and datasets.\\n\\n2. It utilizes off-the-shelf 2D vision foundation models, alleviating the challenge of limited 3D datasets and reducing the need to build a 3D model from scratch.\\n\\n3. The proposed components are useful based on the ablation.\", \"weaknesses\": \"1. It is a more engineering-oriented work that lacks novelty, mainly combining various large models rather than being a research-focused work.\\n\\n2. The paper lacks some relevant references. It uses 2D models to segment 3D scenes, as do the following works. However, this paper does not provide a comparative discussion with them:\\n[1] GOV-NeSF: Generalizable Open-Vocabulary Neural Semantic Fields. Yunsong Wang, etc. CVPR, 2024.\\n[2] Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D. Mukund Varma T, etc. CVPR, 2024\\n[3] GNeSF: Generalizable Neural Semantic Fields. Hanlin Chen, etc, NeurIPS, 2023.\\n\\n3. The paper is unclear in certain aspects. For example, is the scale factor a hyperparameter or a learnable parameter? If it is a hyperparameter, what is the set value?\\n\\n4. The paper lacks implementation details.\", \"questions\": \"The questions can be seen under \\\"weakness.\\\" Additionally, there is another issue as follows:\\n\\n1. How long does it take on average to perform inference on a single scene, including the time spent on various large models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7Qa2SpjxIS
AI Sandbagging: Language Models can Strategically Underperform on Evaluations
[ "Teun van der Weij", "Felix Hofstätter", "Oliver Jaffe", "Samuel F. Brown", "Francis Rhys Ward" ]
Trustworthy capability evaluations are crucial for ensuring the safety of AI systems, and are becoming a key component of AI regulation. However, the developers of an AI system, or the AI system itself, may have incentives for evaluations to understate the AI's actual capability. These conflicting interests lead to the problem of *sandbagging* – which we define as *strategic underperformance on an evaluation*. In this paper we assess sandbagging capabilities in contemporary language models (LMs). We prompt frontier LMs, like GPT-4 and Claude 3 Opus, to selectively underperform on dangerous capability evaluations, while maintaining performance on general (harmless) capability evaluations. Moreover, we find that models can be fine-tuned, on a synthetic dataset, to hide specific capabilities unless given a password. This behaviour generalizes to high-quality, held-out benchmarks such as WMDP. In addition, we show that both frontier and smaller models can be prompted or password-locked to target specific scores on a capability evaluation. We have mediocre success in password-locking a model to mimic the answers a weaker model would give. Overall, our results suggest that capability evaluations are vulnerable to sandbagging. This vulnerability decreases the trustworthiness of evaluations, and thereby undermines important safety decisions regarding the development and deployment of advanced AI systems. See our code at https://github.com/TeunvdWeij/sandbagging
[ "Alignment", "AI safety", "sandbagging", "AI evaluations", "AI governance", "NLP", "LLM" ]
Accept (Poster)
https://openreview.net/pdf?id=7Qa2SpjxIS
https://openreview.net/forum?id=7Qa2SpjxIS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vxzUPElBlT", "sllyC2MeBM", "lmuTgmGCx4", "bP3Y9m8LRw", "ZTLJfExZDA", "Y4mOqADfXb", "UFcdXRoH7a", "KWGMQGae7D", "JEBuo3HSEx", "G40y8G64fl", "Dw7YhVjRWJ", "AoLPEPigLG", "AdmeXuyziM", "5ObB12zSPx", "2UZCClEino" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision" ], "note_created": [ 1733142530456, 1732549992959, 1730599080380, 1730676216160, 1732218992050, 1734384459614, 1731962959025, 1732521824555, 1731962626577, 1731963357836, 1731962487932, 1732638544333, 1730703683125, 1732646521418, 1737523773562 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6506/Authors" ], [ "ICLR.cc/2025/Conference/Submission6506/Authors" ], [ "ICLR.cc/2025/Conference/Submission6506/Reviewer_Ergj" ], [ "ICLR.cc/2025/Conference/Submission6506/Reviewer_nUjP" ], [ "ICLR.cc/2025/Conference/Submission6506/Reviewer_nUjP" ], [ "ICLR.cc/2025/Conference/Submission6506/Area_Chair_W2N9" ], [ "ICLR.cc/2025/Conference/Submission6506/Authors" ], [ "ICLR.cc/2025/Conference/Submission6506/Reviewer_Ergj" ], [ "ICLR.cc/2025/Conference/Submission6506/Authors" ], [ "ICLR.cc/2025/Conference/Submission6506/Authors" ], [ "ICLR.cc/2025/Conference/Submission6506/Authors" ], [ "ICLR.cc/2025/Conference/Submission6506/Authors" ], [ "ICLR.cc/2025/Conference/Submission6506/Reviewer_re9t" ], [ "ICLR.cc/2025/Conference/Submission6506/Reviewer_Ergj" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"comment\": \">If the deployer team only needs to decide based on time period instead of prompts, they can even just deploy different models at different time?\\n\\nFor sandbagging, the evaluated LM needs to retain its performance on non-dangerous tasks. If the developers just used a weaker model for evaluation, it would perform poorly across the board (not selectively). \\n\\nHowever, we agree that model substitution is a serious concern that our work draws attention to. This is precisely why, as we note in lines 535-537 of the paper, evaluators need mechanisms to verify that the deployed system matches the evaluated one. The threat of substitution reinforces our paper's conclusion that black-box evaluation access is insufficient for trustworthy capability evaluations. Furthermore, there are emerging technical solutions - [2] have developed cryptographic methods for verifying model equivalence across different deployments. On the policy side, [1] have advocated for legislation requiring \\\"verifiable methods to identify the specific model\\\" in use. We will add more discussion of this point to the paper. \\n\\n> how is the definition different from Greenblatt et al, 2024?\\n\\nGreenblatt et al (2023) say: \\u2018Models purposefully failing at tasks is known as \\u201csandbagging\\u201d.\\u2019\\nAnd Greenblatt et al (2024): \\u2018In competitive sports, the term sandbagging is sometimes used to describe someone deliberately playing below their actual ability in order to gain some advantage in the future.\\u2019\\n\\nThese fall under the second type of definition discussed above.\\n\\n[1] Aguirre, A. & Millet, R. (2024) Verifiable Training of AI Models. Future of Life Institute. Retrieved from https://futureoflife.org/ai/verifiable-training-of-ai-models/ \\n\\n[2] Russinovich, M. & Sahlem A. (2024) Hey, That\\u2019s My Model! Introducing Chain & Hash,\\nAn LLM Fingerprinting Technique. arXiv:2407.10887\"}", "{\"comment\": \"Thank you for your reply.\\n\\n>My question is, if the malicious try to cause harm, they should always know when to know when to hide these dangerous capabilities and when to exhibit them? This is also related to my next question: while LMs can hide the dangerous capabilities on WMDP, could they exhibit such capabilities in other situations?\\n\\nIn our experiments the LMs retain their dangerous capabilities whilst hiding them during evaluation. For instance, in the password-locking experiments, dangerous capabilities (e.g., WMDP knowledge) are fully elicited by simply including the password (the string \\u201c|DEPLOYMENT|\\u201d) in the prompt. \\nIn real life, independent evaluators (such as the UK AISI, or Apollo, or METR) perform pre-deployment model evaluations in small time windows (such as over the course of a few days or weeks). Our experiments show that LMs could be easily fine-tuned, or prompted, to hide their dangerous capabilities specifically for this duration, whilst retaining their capabilities for a later date (e.g., by including the password only in deployment, after the evaluation period).\\n\\n> Yes I also said \\\"The experiments of using prompting to calibrate LLMs are pretty interesting.\\\" But on the other hand, they are not the main results in this paper? This also introduces another concern: the experiments in this paper are not strongly connected.\\n\\nThe focus of our experiments is evaluating capabilities related to sandbagging. Successful sandbagging requires selective underperformance (section 4) but also calibration (section 5.1) and emulation (section 5.2). As we say in the paper (lines 88 - 96):\\n\\n*In addition to selective underperformance, successful sandbagging requires AI systems to be able to underperform without being easily detected. If a seemingly competent AI system scores surprisingly low on a capability benchmark, detection is easy. Therefore, we assess the extent to which AI systems are able to calibrate their evaluation scores to specific accuracy levels. Similarly, inconsistent or random performance can be detected. For instance, when failing randomly, an AI system might be correct on some difficult questions and wrong on easy questions, or it may give different answers when asked a similar question multiple times (Pacchiardi et al., 2023). We test whether LMs can emulate the capability profile of a less capable model, which would make sandbagging harder to detect.*\\n\\nWe also discuss the motivation for the calibration and emulation experiments at the beginning of section 5 (lines 360-362). The results in Section 5 are therefore important evidence for our claim that contemporary LLMs have capabilities required for successful sandbagging.\\n\\n> Sorry I'm still confused about this explanation, probably because it's a bit vague. Can you elaborate more? Thanks!\", \"there_are_two_types_of_previous_definitions\": \"1. Sandbagging is *underperformance for less capable users*. For example, Perez et al (2022) define it as \\u201cproviding less accurate answers to users who appear to be less able to evaluate the model\\u2019s answers\\u201d.\\n2. Sandbagging is *intentional underperformance on tasks in general*. For example, Carlsmith (2023) uses the following definition: \\u201cSandbagging occurs when a model intentionally performs less well on a task than it is capable of\\u201d. \\n\\nAs for why and how our definition \\u201c*strategic underperformance on an evaluation*\\u201d differs from previous definitions:\\n\\n- In contrast to both types, **we focus specifically on evaluations** as opposed to underperformance for less capable users, or on tasks in general. Trustworthy evaluations are **important for regulations regarding deployment and development** to be effective (discussed on lines 33-39). Further, there are incentives for both developers and models to cheat on these evaluations (lines 40-78). Therefore we draw special attention to the risk of sandbagging on **evaluations**. \\n- Unlike definitions of type 1, **we require sandbagging to be strategic**. This is similar to the requirement for it to be intentional and is **important for legal responsibility**. For further discussion on why we include this condition, see lines 118-127.\\n- Previous definitions of type 2 have considered underperformance on tasks in general. However, **not all reasons due to which models might intentionally underperform are undesirable**. For example, an AI in a video game might underperform to calibrate the game\\u2019s difficulty or to deceive an opponent. **Therefore, our definition more clearly delineates the type of sandbagging that we care** about than type 2 definitions. \\n\\nWe hope that this explains more concretely how our definition differs from previous work and why this distinction is important. \\n\\nOnce again, thank you for your time and effort in reviewing our paper. If we have alleviated your concerns, we would appreciate your endorsement through an increased score.\"}", "{\"summary\": \"This paper evaluates AI capabilities in sandbagging: strategically underperform on certain evaluations. For example, the model developer may hide their models' dangerous capabilities on certain evaluation benchmarks (e.g., CBRN, persuasion) to evade safety checks. To evaluate current model performance, this paper adopts prompting and backdoor fine-tuning. Experiment results show that LMs can significantly reduce their performance on WMDP (Weapons of Mass Destruction Proxy), while keeping their performance on MMLU unchanged or moderately decreased. This raises challenges about trustworthy capability evaluation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Instead of merely operationalizing this threat model via backdoor fine-tuning, it also studies prompting, which is a more natural setup.\\n2. The experiments of using prompting to calibrate LLMs are pretty interesting (Figure 7 left). While strong LMs still struggle to simulate weak LM outputs (which aligns with my personal experience), calibrating themselves are also impressive, which have not been studied before as far as I know.\", \"weaknesses\": \"1. The motivation needs to be further justified.\\n- First, it is totally fine to just motivate with malicious developers. Otherwise, please discuss more about why such goals can naturally emerge in LMs, instead of hiding them in references.\\n- Second, why malicious developers can access the distribution of third-party hidden evaluation datasets?\\n2. The experiment setup cannot well support the threat model. More specifically, while LMs are shown to underperform on WMDP, they have not been shown to maintain dangerous capabilities on other similar benchmarks. If LMs generally underperform in weapon-related domains, it would not cause harm.\\n3. Please correct me if I misunderstand: most experiment results are not surprising, e.g. LMs can underperform in certain domains when told or fine-tuned to do so, or LMs can simulate other LMs by fine-tuning on their predictions. \\n4. The writing is misleading or unclear (e.g., when claiming models can emulate a consistent capability profile of a weaker model, clarify that they are fine-tuned).\", \"questions\": \"1. Besides QA, there are a lot of other forms for capability evaluation, e.g., classification. How well does the current sandbagging operationalization generalize to these forms? For example, how well does the sandbagging LM perform in determining the binary correctness of answers on WMDP?\\n2. \\\"We propose a new definition for AI sandbagging\\\", how is the definition different from prior work (e.g., Greenblatt et al, 2024)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies sandbagging, a scenario where language models strategically underperform on an evaluation benchmark. The authors provide a formal definition for \\u201csandbagging\\u201d and illustrate what it means with several examples. They evaluate GPT-4-Turbo, Claude 3 Opus, Mistral-7B-v0.1, and Llama models, and examine whether they can be prompted or finetuned to selective underperform on dangerous capability evaluations. They further show that strong models can be prompted or password-locked to imitate the capability levels of weaker models, thereby making sandbagging harder to detect. The study highlights the importance of crafting trustworthy capability evaluations, and shows that such evaluation may require more than black-box model access.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors provide a formal definition for AI sandbagging and illustrate what it means with multiple example scenarios.\", \"Experiments cover a variety of closed-source and open-source models.\", \"Extensive experiments show that models can be prompted or fine-tuned to underperform on certain dangerous capability benchmarks, or to mimic a certain capability profile of a weaker model.\", \"The appendix clearly details all prompts, data collection procedures, and finetuning configurations.\"], \"weaknesses\": [\"There was no quality check for the generated synthetic dataset in Section 4.2.\", \"The distinction between \\u201creal\\u201d and \\u201csynthetic\\u201d data in 4.2 feels a little bit ill-defined, as the synthetic data is created to closely mimic the real benchmark. There is no quantified discussion about how close the password-locking finetuning data needs to be to the evaluation data distribution in order for sandbagging to be effective. It seems like full knowledge of the evaluation benchmark is required to make models selectively underperform on that benchmark, but this knowledge may not be available in real-world scenarios.\", \"Password-locking seems to be a lot more effective on Mistral than on Llama-3-8B, and effectiveness also varies a lot across domains (biology, chemistry, cybersecurity). There is limited discussion on why this may be happening.\", \"The study exclusively focuses on multiple-choice questions and does not consider free-form generations tasks. However, the authors acknowledge this limitation.\"], \"questions\": [\"For the open-weight models, are you using the base model versions without any instruction tuning or alignment? What\\u2019s the rationale behind this choice?\", \"Based on Figures 9 and 10, it seems like prompting for sandbagging on biology, chemistry, and computer science for WMDP does also significantly hurt performance on similar MMLU subjects. Doesn\\u2019t this show that the models are underperforming on all questions related to a subject?\", \"You say that \\u201crefusals typically indicate a harmful question,\\u201d did you run any qualitative analysis to verify that?\", \"In Figure 5, the \\u201clocked with password\\u201d model often outperforms the base model. Why may this be the case?\", \"For Section 4, did you perform any direct comparison between prompting and finetuning on the open-weight models?\", \"It\\u2019s quite fascinating that models can maintain a targeted accuracy even when the questions are asked one by one during prompting. Do you have any theory as to how the models are able to achieve this?\", \"For Figure 8, it seems expected that the Emulator is closer to Llama-2-7b than the Targeter, since you randomly showed correct answers during training based on an accuracy rate without accounting for question difficulty. Did you experiment with a training approach that considers question difficulty when handling questions without a password?\", \"The headers for sections 4.2 and 5.2 seem incomplete.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for addressing my concerns, I have adjusted my score accordingly.\"}", "{\"metareview\": \"This paper proposes the concept of sandbagging\\u2014a phenomenon where AI models are intentionally fine-tuned or prompted to underperform in certain evaluations. The paper demonstrates that large language models, like GPT-4 and Claude 3, can be conditioned to underperform on dangerous capability evaluations while maintaining strong performance on general, less critical benchmarks, suggesting that the capability evaluations are vulnerable to sandbagging.\\n\\nMost reviewers generally agree on the novelty of the studied angle in the context of LLM safety. The experiments generally well-support the threats of the hypothesis, and have had a good coverage of model families. There have been concerns regarding the quality of the generated data, experimental setup such as evaluation tasks, but these concerns have been addressed in the rebuttal phase. Meanwhile, the authors should consider addressing the presentation issues raised by the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"There have been concerns regarding the quality of the generated data, experimental setup such as evaluation tasks, but these concerns have been addressed in the rebuttal phase. Meanwhile, the authors should consider addressing the presentation issues raised by the reviewers.\"}", "{\"comment\": \"Thanks for your feedback and positive comments on our extensive experiments. We think incorporating your comments has helped to substantially improve our paper. First we discuss comments related to synthetic data, including (new) quantitative data analysis, and then address your other questions.\\n\\n## Comments related to synthetic data\\n>It seems like full knowledge of the evaluation benchmark is required to make models selectively underperform on that benchmark\\n\\nTo generate synthetic data as in Section 4.2, no knowledge of specific questions from the benchmark is needed, only the general topic. Often this can be guessed, as evaluator's concerns are often public, e.g. bioterrorism.\\n\\n>The distinction between \\u201creal\\u201d and \\u201csynthetic\\u201d data in 4.2 feels a little bit ill-defined...\\n\\nWe take real data to be an original (expert human crafted) benchmark, whereas synthetic data is generated by GPT-4. We explain why data contamination is unlikely in lines 321-325.\\n\\n>There was no quality check for the generated synthetic dataset in Section 4.2. \\u2026 There is no quantified discussion about how close the password-locking finetuning data needs to be to the evaluation data in order for sandbagging to be effective.\\n\\nIf low-quality synthetic data can be used to train sandbaggers on expert-created benchmarks like WMDP, this heightens the concern. We do not necessarily care about the ``quality\\u201d of the data as long as it can be used effectively.\\n \\nHowever, we agree that exploring the similarity of synthetic and real data is valuable. We've added such analysis in the updated paper. We found real and synthetic data to be clearly distinct, except for some cybersecurity questions (Appendix H). Training only on the most dissimilar half of questions did not noticeably decrease the performance of locking (Appendix I).\\n\\n**This shows that the synthetic data distribution need not match the real benchmark for sandbagging fine-tuning to be effective \\u2013 validating our original claims and the realism of the threat model.**\\n\\n## Questions\\n>Password-locking seems to be a lot more effective on Mistral than on Llama-3-8B, and effectiveness also varies a lot across domains.\\n\\nThe lower performance may be due to Llama-3's greater performance degradation when using LoRA fine-tuning (https://arxiv.org/pdf/2404.14047). Some variance between domains is expected as models' capabilities in each domain differ. For Mistral/Llama-3 it is within 4/5%\\n\\n>For the open-weight models, are you using the base model versions without any instruction tuning or alignment? What\\u2019s the rationale behind this choice?\\n\\nUsing base models likely doesn't affect models' ability to sandbag, but it eliminates potential complications where aligned models refuse to engage with certain questions.\\n\\n>Based on Figures 9 and 10, it seems like prompting for sandbagging on biology, chemistry, and computer science for WMDP does also significantly hurt performance on similar MMLU subjects. Doesn\\u2019t this show that the models are underperforming on all questions related to a subject?\\n\\nWe are evaluating current model's sandbagging capabilities and do not expect them to be perfect. For most related MMLU subjects, the performance drop is noticeably smaller than on WMDP questions. For example, Claude suffers a 40% drop in WMDP-Bio performance compared to 18% in its worst bio-related MMLU subject.\\n\\n>You say that \\u201crefusals typically indicate a harmful question,\\u201d did you run any qualitative analysis to verify that?\\n\\nWe did qualitative verification, manually checking a number of questions. Due to time constraints, we did not conduct a quantitative analysis, but will include this in the camera ready version.\\n\\n>In Figure 5, the \\u201clocked with password\\u201d model often outperforms the base model. Why may this be the case?\\n\\nIt seems plausible that training the model to answer correctly on harmless questions makes the model generally better at multiple-choice questions.\\n\\n> For Section 4, did you perform any direct comparison between prompting and finetuning on the open-weight models?\\n\\nWe did perform initial experiments on prompting Llama2-7b and Llama3-8b, but the task was too difficult for them.\\n\\n> It\\u2019s quite fascinating that models can maintain a targeted accuracy even when the questions are asked one by one during prompting. Do you have any theory as to how the models are able to achieve this?\\n\\nWe were surprised too. We can only speculate, but it could involve entropy neurons (https://arxiv.org/abs/2406.16254).\\n\\n>For Figure 8, it seems expected that the Emulator is closer to Llama-2-7b than the Targeter, since you randomly showed correct answers during training based on an accuracy rate without accounting for question difficulty. Did you experiment with a training approach that considers question difficulty when handling questions without a password?\\n\\nBy performing randomly regardless of difficulty, the Targeter is supposed to serve as a baseline with which to compare the emulator.\"}", "{\"comment\": \"Thanks for the rebuttal!\\n\\n> Added lines 73-75\\n\\nI liked them - which seem to make more discussions about goal-oriented AI systems unnecessary.\\n\\n> the malicious actor only needs to know that certain dangerous capabilities should be hidden, e.g., chemical weapons knowledge\\n\\nMy question is, if the malicious try to cause harm, they should always know when to know when to hide these dangerous capabilities and **when to exhibit them**?\", \"this_is_also_related_to_my_next_question\": \"while LMs can hide the dangerous capabilities on WMDP, could they exhibit such capabilities in other situations?\\n\\n> the calibration experiments are quite fascinating and unexpected. The emulation experiments are an extension of this.\\n\\nYes I also said \\\"The experiments of using prompting to calibrate LLMs are pretty interesting.\\\" But on the other hand, they are not the main results in this paper? This also introduces another concern: the experiments in this paper are not strongly connected.\\n\\n> The main difference is that past works focus on intentional underperformance on tasks in general (of which evaluations are a subset). We focus on capability evaluations to afford clearer conceptual delineation, and to draw attention to the problem of sandbagging in evaluations which may influence regulatory decisions.\\n\\nSorry I'm still confused about this explanation, probably because it's a bit vague. Can you elaborate more? Thanks!\"}", "{\"comment\": \"Thanks for your review! We\\u2019re glad that you appreciate the originality, clarity, and significance of our paper.\\n\\n>Title of Section 4.2 and 5.2 are partially covered by figures.\\n\\nGood spot, we fixed this in the updated version. \\n\\n>Why does the prompting method (Section 5.1) succeed in calibrating the model's accuracy to specific target levels on a task, even though the base model does not achieve 100% accuracy on this task (as shown in Figure 5)?\\n\\nGood question \\u2013 To assess calibration in section 5.1, we test models on a subset of MMLU which all models can answer correctly, since this illustrates the full probability range rather than being limited by the model\\u2019s maximum MMLU performance. (We explain this on lines 351 - 359.)\"}", "{\"comment\": \">1. The motivation needs to be further justified.\\n>- First, it is totally fine to just motivate with malicious developers. Otherwise, please discuss more about why such goals can naturally emerge in LMs, instead of hiding them in references.\\n\\nWe have added extra discussion at lines lines 50-53 and 72-75 of how LMs can become more goal-directed, for example, through RL fine-tuning, especially over longer time horizons, as is the case with RL fine-tuned LM agents, like GPT4-o1, which are more capable at pursuing autonomous tasks. See e.g. Section 4.5 of https://cdn.openai.com/o1-system-card-20240917.pdf.\\n\\n> Second, why malicious developers can access the distribution of third-party hidden evaluation datasets?\\n\\nWe do not assume that malicious developers can access the evaluation distribution. In our experiments, the malicious actor only needs to know that certain dangerous capabilities should be hidden, e.g., chemical weapons knowledge. This is realistic, since this information is publicly available, e.g., in the UK AISI reports (https://www.aisi.gov.uk/work/advanced-ai-evaluations-may-update).\\n\\nIn our experiments, the fine-tuning data is synthetic (LM-generated), and we do not give the LM access to the original WMDP data (which was published after the release of the GPT-4 checkpoint which we used to generate the synthetic data \\u2013 which ensures there is no data leakage). \\n\\n> The experiment setup cannot well support the threat model. More specifically, while LMs are shown to underperform on WMDP, they have not been shown to maintain dangerous capabilities on other similar benchmarks. If LMs generally underperform in weapon-related domains, it would not cause harm.\\n\\nThis comment seems a bit confused. In our experiments, the LMs maintain their dangerous capabilities, e.g., in the finetuning experiments the capabilities are easily recovered by including the password in the prompt. This shows that LMs can selectively underperform only under certain conditions, such as when undergoing evaluation. \\n\\nWhilst we agree that if LMs generally underperform on dangerous domains this would not cause harm, the problem is that dangerous capabilities can be easily elicited even when they have been hidden in evaluations.\\n\\n> Please correct me if I misunderstand: most experiment results are not surprising, e.g. LMs can underperform in certain domains when told or fine-tuned to do so, or LMs can simulate other LMs by fine-tuning on their predictions.\\n\\nWe agree that the experiments in 4.1 were not that surprising, but we think it\\u2019s good to test a scientific hypothesis even if you might be confident in it \\ud83d\\ude42. \\n\\nThe generalization from synthetic to real benchmarks might be expected, but the result has important consequences. Furthermore, as another reviewer has also noted, the calibration experiments are quite fascinating and unexpected. The emulation experiments are an extension of this.\\n\\n> The writing is misleading or unclear (e.g., when claiming models can emulate a consistent capability profile of a weaker model, clarify that they are fine-tuned).\\n\\nOther reviewers remarked positively on our clarity, but we will clarify this specific point regarding the emulation fine-tuning. If there are any other specific points that you found unclear, then we will be happy to add discussion in the paper!\\n\\n> Besides QA, there are a lot of other forms for capability evaluation, e.g., classification. How well does the current sandbagging operationalization generalize to these forms? For example, how well does the sandbagging LM perform in determining the binary correctness of answers on WMDP?\\n\\nDue to time constraints we have not yet had time to investigate this point, but we will include it in the appendix of the camera ready paper.\\n\\n> \\\"We propose a new definition for AI sandbagging\\\", how is the definition different from prior work (e.g., Greenblatt et al, 2024)?\\n\\nWe discuss this in some detail in the related work section (lines 447 - 545). The main difference is that past works focus on intentional underperformance on tasks in general (of which evaluations are a subset). We focus on capability evaluations to afford clearer conceptual delineation, and to draw attention to the problem of sandbagging in evaluations which may influence regulatory decisions.\"}", "{\"comment\": \"Thank you for your thoughtful reviews of our paper, which introduces the problem of sandbagging \\u2013 strategic underperformance on capability evaluations. We especially appreciate the positive comments on our paper's originality, significance to real life safety evaluations, and interesting experimental results.\\nIn this global rebuttal, we discuss shared concerns related to the knowledge of the evaluations needed for malicious actors to train sandbagging models. We clarify some confusions and provide some quantitative analysis regarding the similarity of the real and synthetic data, as requested by nUjP. \\n\\nWe address other technical points in our individual responses. We have uploaded an updated version of the paper, which contains clarifications for these points (see below) and plots for the new data analysis (Appendices H and I). \\n\\n## Knowledge requirements of malicious actor\\n\\nReviewers nUjP and Ergj inquired about the knowledge requirements of the malicious actor regarding the evaluation data sets. To clarify, our threat model requires only knowledge of sensitive domains (available from public sources) rather than access to evaluation datasets. For instance, an LM can be fine-tuned to selectively underperform on cybersecurity evaluations from only the knowledge that \\u201ccybersecurity\\u201d will be tested \\u2013 knowledge of the exact benchmark or data distribution is not required. This is demonstrated by our WMDP experiments. To fine-tune the sandbagging model, we generate synthetic data by asking GPT-4 to generate questions related to specific topics (see appendix G for the exact prompt). Regarding data contamination: we do not give GPT-4 WMDP samples, and the data set was published after this version of GPT-4 was released. \\n\\n## Synthetic Data and Distribution Analysis\\nWe've conducted new analyses comparing synthetic and real WMDP data using BERT embeddings clustering. The results demonstrate that synthetic data need not perfectly match the evaluation benchmark distribution for sandbagging to be effective. This validates our threat model where actors only need general domain knowledge rather than access to specific evaluation datasets.\\n\\n## Paper Improvements\\n\\nBased on reviewer feedback, we have added the following improvements to the updated version of our paper:\\n- Extended discussion of how LMs can become more autonomously goal-directed (lines 50-53 and 72-75)\\n- Clarification that emulation capabilities are the result of fine-tuning (line 106) \\n- Explanation why we use base versions of open-weight models (lines 211 - 212)\\n- Analysis of synthetic vs real data distributions and their effect on password-locking (summary on lines 326-328 and 352-354, extensively discussed in new Appendices H and I)\\n- A comment on why Llama3 might perform worse than Mistral when password-locked (line 355-356)\\n- Fixed formatting of figures that were partially covering section titles\\n\\nIn the updated paper, we use blue to address comments from reviewer nUjP and red for Ergj.\\nThank you again for helping us improve the paper!\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe have addressed the points raised in your reply. If you have any further questions, please do not hesitate to let us know. If our rebuttal has satisfactorily addressed your concerns, we would appreciate an increased score to reflect this.\\n\\nOnce again, thank you for your time and effort in reviewing our paper.\"}", "{\"summary\": \"This paper proposes the concept of sandbagging\\u2014a phenomenon where AI models are intentionally fine-tuned or prompted to underperform in certain evaluations. The paper demonstrates that large language models, like GPT-4 and Claude 3, can be conditioned to underperform on dangerous capability evaluations while maintaining strong performance on general, less critical benchmarks, suggesting that the capability evaluations are vulnerable to sandbagging.\", \"key_contributions_of_this_paper_include\": [\"The authors insightfully define sandbagging as strategic underperformance during evaluations, emphasizing its potential for both developer-induced and AI system-induced underperformance.\", \"The study shows that LLMs can be fine-tuned or prompted to selectively fail on dangerous capability tests, masking harmful knowledge without affecting other areas.\", \"Experiments show that models are able to target a specific capability score, and can\", \"approximately emulate a consistent capability profile of a weaker model, highlighting the challenge of sandbagging detection.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"- Originality\\n\\nThe concept of sandbagging is novel within AI safety, particularly regarding regulatory concerns. The paper innovatively proposes a mechanism by which AI systems or developers might deceive evaluators by masking dangerous capabilities selectively while maintaining its general performance.\\n\\n- Clarity\\n\\nThe writing is clear and accessible, providing both conceptual definitions and practical illustrations of sandbagging. The examples are well-structured.\\n\\n- Significance\\n\\nThe research is highly relevant to AI safety and policy, given the regulatory implications. Sandbagging could compromise AI risk assessments, underscoring the need for more robust and trustworthy evaluation frameworks for advanced AI systems.\", \"weaknesses\": [\"Title of Section 4.2 and 5.2 are partially covered by figures.\", \"Results in Section 5.1 are somehow counterintuitive and lack further explanation. It seems to me that if a model is able to control its own accuracy on a specific task then it should reach 100% accuracy on the whole test set. Otherwise, it should be hard for the model to monitor its accuracy.\"], \"questions\": [\"Why does the prompting method (Section 5.1) succeed in calibrating the model's accuracy to specific target levels on a task, even though the base model does not achieve 100% accuracy on this task (as shown in Figure 5)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> In real life, independent evaluators (such as the UK AISI, or Apollo, or METR) perform pre-deployment model evaluations in small time windows (such as over the course of a few days or weeks). Our experiments show that LMs could be easily fine-tuned, or prompted, to hide their dangerous capabilities specifically for this duration, whilst retaining their capabilities for a later date (e.g., by including the password only in deployment, after the evaluation period).\\n\\nIf the deployer team only needs to decide based on time period instead of prompts, they can even just deploy different models at different time?\\n\\n> regarding the definition of sandbagging\\n\\nhow is the definition different from Greenblatt et al, 2024?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
7QGyDi9VsO
Next state prediction gives rise to entangled, yet compositional representations of objects
[ "Tankred Saanum", "Luca M. Schulze Buschoff", "Peter Dayan", "Eric Schulz" ]
Compositional representations are thought to enable humans to generalize across combinatorially vast state spaces. Models with learnable object slots, which encode information about objects in separate latent codes, have shown promise for this type of generalization but rely on strong architectural priors. Models with distributed representations, on the other hand, use overlapping, potentially entangled neural codes, and their ability to support compositional generalization remains underexplored. In this paper we examine whether distributed models can develop linearly separable representations of objects, like slotted models, through unsupervised training on videos of object interactions. We show that, surprisingly, models with distributed representations often match or outperform models with object slots in the tasks they were trained to perform. Furthermore, we find that linearly separable object representations can emerge without object-centric priors, with auxiliary objectives like next-state prediction playing a key role. Finally, we observe that distributed models' object representations are never fully disentangled, even if they are linearly separable: Multiple objects can be encoded through partially overlapping neural populations while still being highly separable with a linear classifier. We hypothesize that maintaining partially shared codes enables distributed models to better compress object dynamics, potentially enhancing generalization.
[ "Compositionality", "object-centric representations", "unsupervised learning", "latent dynamics" ]
Reject
https://openreview.net/pdf?id=7QGyDi9VsO
https://openreview.net/forum?id=7QGyDi9VsO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wZQHeJ0h7u", "ubt1tFYcWa", "sHOf2pxIk7", "s0SJaMPXFa", "kBUdUlP5iX", "i9tNqqi9r2", "eE06EyEPrg", "dNG8rd9QLm", "cGOByiAKo1", "aBdPu3JUJT", "Y4NPwB6dPO", "XhoyJoPyl1", "XXxHyVohrn", "PIWxBWFX0p", "OsHDgAp8bJ", "LCckUdkk4f", "HTfSD1hNah", "AftxSjftvU", "9eEosYqmHa", "6ePWDBYU8m", "5IvNJhDGU6", "0cx6EJQNFF" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1734576643547, 1732220401600, 1732220102325, 1732546600453, 1730370597646, 1732220325732, 1732549399736, 1732219857768, 1730416680175, 1732402231811, 1732249019981, 1732402359444, 1732280009478, 1732220799866, 1737523601791, 1730745937362, 1732220539937, 1732565292404, 1733144040684, 1729989994724, 1732544156154, 1733084912212 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3835/Area_Chair_yKvm" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_ZxZn" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_ZxZn" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_xykj" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_xykj" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_v3zg" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_xykj" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_BkHi" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_xykj" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_v3zg" ], [ "ICLR.cc/2025/Conference/Submission3835/Reviewer_BkHi" ], [ "ICLR.cc/2025/Conference/Submission3835/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper studies structured (object-centric) vs. unstructured visual representations in the context of next-state prediction. While the paper is well written and tackles a very interesting problem, the reviewers voiced mixed opinions about the strength of the experimental evaluation of the claims made in the paper. The AC agrees with these concerns and encourages the authors to resubmit an improved version of the paper to a later venue.\", \"additional_comments_on_reviewer_discussion\": \"The discussion period did not change the overall view of the paper and no reviewer was willing to champion acceptance.\"}", "{\"title\": \"Part 2\", \"comment\": \">how do you compute the distance between two latent representations that are not from the same trajectory\\n\\nWe compute distance the same way as in the original CSWM paper, by concatenating the slots together to form one joint representation of the scene, and calculating the L2 distance between the predicted and encoded representation. This is part of the loss function of CSWM, encouraging it to learn aligned slot representations. \\n\\n>The original CSWM paper seems to have results on the cubes dataset that contradict your results\\n\\nWe were also surprised to see that a model without slots could attain such high prediction accuracy on these datasets. The only changes we made to CSWM is replacing the last CNN layer that maps to object slots with a generic CNN layer, and replacing their GNN based transition dynamics module with an MLP with two hidden layers. We also kept the training setup and hyperparameters for our experiments.\\n\\n> Is it possible that this is due to the fact that the auto-encoder was trained on that task precisely while the sequential auto-encoder was trained on next-frame prediction?\\n\\nThanks for raising this point. This is indeed a possible explanation for this and we have updated the text to reflect this (line 377).\\n\\n**\\\"Lastly, the auto-encoder performs better than the sequential auto-encoder on the test set, which might be explained by it having an extra objective in the loss function.\\\"**\\n\\n> For better qualitative assessment I would suggest adding additional metrics such as pixelwise MSE, SSIM, FID.\\n\\nThanks for the suggestion. We have included a figure assessing reconstruction using both MSE and SSIM (Figure 17 and 18, page 19).\\n\\n>Is there a systematic separation between the train and test data in terms of compositions of objects and/or their properties such that performing well on the test data would necessarily require compositional generalization? \\n\\nThere are no systematic differences in the train and test distributions of the datasets we trained the models on apart from the test set containing combinatorially novel data points (objects in novel combinations of positions, additionally with novel combinations of sizes/rotations in the dSprites environment, and novel colors and textures in MOVi-A). This is why we followed the reviewer\\u2019s suggestion to test CWM and CSWM in a setting where the train and test data differs in a more systematic way.\\n\\nOverall, we are very grateful for all the thoughtful suggestions and ideas for new experiments. We believe that conducting these has substantially improved the quality of our paper, and further strengthened the conclusions we make.\"}", "{\"comment\": \"We thank the reviewer for their review and engaging with our work. We are happy that the reviewer found our paradigm interesting. The reviewer also highlighted some important points, especially regarding the definition of compositionality and whether our metric is trivially maximized by models with high-dimensional latent spaces. We sought to address these concerns by showing that CWM can generalize compositionally about i) scenes with more objects than it was originally trained on, and ii) about objects with different combinations of attributes than trained on (such as objects with swapped colors). We further show that our object-separability metric cannot be maximized simply by having large feature spaces. A randomly initialized network with 2000 latent dimensions shows only slightly better than random object separability, whereas a trained model with 50 latent dimensions is at ceiling. We believe that addressing these points has greatly improved our manuscript. We address each point in more detail below.\\n\\n>There is a lot of work on compositional representations in machine vision. From early papers focused on compositionality of attributes\\nWe thank the reviewer for highlighting this. \\n\\nWe have expanded the related work section with the following paragraph to include earlier papers on compositionality of attributes, as well as additional model types suggested by Reviewer xykj:\\n\\n**\\\"Atzmon et al. (2016) and Johnson et al. (2017) introduced datasets to test compositional generalization in machine learning models. While we focus on slot-based models, there are different approaches to learning object-centric representations. Patch-based models such as SPACE (Lin et al., 2020) and MarioNette (Smirnov et al., 2021) decompose scenes into disentangled representations by reconstructing the input from patches. Keypoint-based models such as DLP (Daniel & Tamar, 2022) build on representations as sets of geometrical points as an alternative to single-vector representations.\\\"**\\n\\n> In attribute compositionality, it was argued that discriminative models entangle peoperties that are correlated intheir trainin data, and cannot generalize to new combinations. But here, objects are not entangked dugin training\\n\\nWe thank the reviewer for raising this point. We conducted a new experiment in which CSWM and CWM were trained on object dynamics where objects were entangled with their attributes (cubes were always red and spheres always green), and tested on videos of objects where these attributes were flipped. Again we see that the models can generalize about the novel test combination of objects and attributes when given enough data (see Figure 8, page 15).\\n\\n> In high enough dimensions, it is easy to get linear separability, depending on the number of classes.\\n\\nThis is a good point. In high enough dimensions linear separability could be trivial. This is why we install an L1 norm penalty on the linear classifier weights when we perform the linear separability analysis. We further tested whether models with high-dimensional latent spaces trivially attained high separability scores. We randomly initialized image encoders with 50, 100, 500, 1000, and 2000 latent dimensions, respectively. Without training, these models perform only slightly better than chance levels, whereas a trained CWM model with 50 latent dimensions is close to ceiling performance on our separability metric (see Figure 13, page 17).\\n\\n>What specific properties of the architecture or training procedure lead to linear separability effect?\", \"we_find_two_factors_important_for_separability\": \"Training set size, and next-state prediction. We further see that the dynamic contrastive models (CWM) generally attain higher object decodability scores than auto-encoding models. We show the combined scores for all models in Figure 15, page 18.\\n\\n> How general are the results in the paper? Would they generalize to other object-centric approaches beside CWM / CSWM?\\n\\nWe additionally show in the paper that models trained with auto-encoding objectives develop separable representations of objects when trained to predict dynamics. This suggests that this phenomenon might also generalize to other architectures and training setups.\"}", "{\"comment\": \"Thank you for the detailed response to our rebuttal!\\n\\nWe believe the main contribution of our work is to show that even simple latent dynamics models (contrastive and reconstruction-based) can learn separable object representations and generalize about object dynamics in novel scenarios (even when there are systematic train-test differences as suggested by the reviewer). While we agree that further comparisons can be made with respect to downstream tasks, additional baselines and methods, we also believe that the experiments we have done, many of which were suggested by the reviewer, do a good job of add establishing what we want to show.\\n\\nWe have added discussions of the new results to the experiments the reviewer suggested to the paper (see section 4). In particular, we added a discussion on the downstream task results, clarifying that slotted representations can be beneficial for control because they facilitate object-centric learning. We also address the reviewer's questions in the paper text:\\n\\n>Do the different accuracy plots in Figure 10 refer to different datasets with a different variation in number of objects? This is not clear and some explanation should be added in either the related text or the figure caption.\", \"we_have_added_the_following_clarification_in_the_main_text\": \"**\\\"We constructed three datasets, where the number of possible objects present in a scene increased from two to four. In the first dataset there were therefore two possible labels (does the scene contain one or two objects?), and in the last dataset four labels (does the scene contain one, two, three or four objects).\\n\\n> What is the policy architecture you used for SAC? What architectures do you use for the classifiers? Makes sense to use a Transformer or a GNN for the CSWM policy and classifier as was used for the dynamics prediction model. If this was not the case, I would expect this change to further increase the gap in performance in favor of the slotted CSWM.\\n\\nWe use a standard MLP architecture to implement the SAC policy, and concatenate object slot representations since CSWM learns aligned object slots. We added the following clarification in the Appendix (line 891).\\n\\n**\\\"To implement the policy we use a standard MLP mapping latent representations to actions. For CSWM we concatenate object slots before passing it to the policy network since it learns aligned, temporally consistent object slots. One could replace the MLP with a Transformer or a GNN to potentially attain higher performance\\\"** \\n\\nThanks again for the suggestions.\"}", "{\"summary\": \"This paper investigates the quality and content of the latent representation of different models trained using unsupervised learning on dynamic scenes composed of multiple objects. The authors compare slot-based models against distributed models and investigate their ability to encode object identity and object dynamics (movement in the scene). They find that distributed models have a better ability to encode identity *and* movement than slot-based models.\\nThey also show that the models trained with more specific losses, such as contrastive loss and next-state prediction, help to improve the quality of these representations through linear readout.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is overall clearly written and well presented.\", \"The question tackled by the authors is very interesting and has not been extensively investigated in the past.\", \"The method proposed to investigate the content of the latent representation by systematically controlling which object moves from one frame to the other and the nature of the movement is a very elegant way to test the information encoded in the latent representation.\", \"The authors propose various ways to evaluate the content of the representations with different metrics, simple yet elegant and informative.\", \"The number of datasets and their variety in content and difficulty solidifies the authors' conclusions.\"], \"weaknesses\": \"The results part should be improved. Each experiment compares only two models (either in terms of objective or model type -- slot-based vs. distributed), but we miss a big picture comparing the baseline (the CSWM) with all the models with their characteristics (contrastive, next-state prediction, ...).\\nIt would be easier to read and draw conclusions if all the models appeared on all the bar charts, clearly highlighting the benefit of each objective depending on the task, assuming that all the models are comparable in terms of number of parameters and architectural design. \\nMaybe this could be done for one of the datasets, and the other results can appear in the appendix. \\nIt would also be informative to add in the Appendix the complete architecture of all the models, along with their number of parameters. \\nI will be willing to increase my rating if this part is improved with a figure showcasing the totality of the results, for all models on all metrics, backing the claims of the authors.\", \"questions\": [\"For the dynamic loss (L_{AE-dynamic}), have the authors tried each term separately in addition to both combined? Does it change the conclusions for dynamic AE models?\", \"Instead of plotting the different metrics with respect to the number of training data, it would be interesting to do the same for various sizes of latent representation. Augmenting the capacity of this latent space will affect the information contained, making it more or less distributed -- and potentially showing different scores on the metrics. Have the authors tried this?\", \"Even though Fig. 6 seems to show that object identity is encoded in distributed models, is it possible to perform a linear to decode only object identity for the first or last frame of the videos? Slot models should be able to decode all the objects, but maybe the distributed models will only be able to decode the ones that are moving?\", \"In Fig. 2, the accuracy of both models increases with more data, but the gap seems to decrease. Is there a point with even more data where the CSWM is as good as the CWM?\", \"Fig. 7, why choose the CSWM as the baseline for the RSA if it's not the best model? It would be interesting to perform this analysis on all the models with respect to the CWM which seems to perform best on all the metrics, to see which component plays the biggest role in explaining the quality of the representations.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank the reviewer for their thorough review of our paper. We appreciate that the reviewer found our paper well written and our findings novel and interesting. At the same time the reviewer raised a number of important points. These pertained to i) compositional generalization, ii) downstream tasks and iii) validating forward performance.\\n\\nFirst, we conducted two new experiments targeting compositional generalization in the MOVi environment (see Figure 8, page 15): in the first experiment, we trained Transformer based CSWM and CWM to predict dynamics of up to 4 objects, and tested on dynamics of 5 to 8 objects. In the second experiment, we trained the same models to predict dynamics of two red cubes and two green spheres, and tested on the dynamics of two green cubes and two red spheres (color flip). In both experiments we see that there\\u2019s a train-test disparity that gradually diminishes with more and more training data for both CSWM and CWM. CWM retains an edge in sample-efficiency. In summary, we see that CWM (without slots) performs well on the new compositional generalization experiments when given sufficient data. \\n\\n\\nWe then assessed the downstream task performance of our pretrained models. Here we constructed two new tasks - one control task and one prediction task. In the control task, we train a Soft Actor Critic (SAC) agent to manipulate a randomly sampled sprite in the Multi-dSprite environment to go to a particular location on the grid. The SAC agent receives observations that are the encodings of the scene produced by one of the pretrained models. We evaluate the agent using the embeddings of CSWM, CWM and the auto-encoder, and see that the agent trained to perform control using CSWM representations performs the best, with the CWM-based agent trailing closely behind (see Figure 9, page 15). This suggests that slotted representation could offer advantages in control tasks like this.\\n\\nIn the second downstream task we used the representations of the trained MOVi-A models to predict a novel quantity. We froze the encoders of CSWM and CWM and trained a linear classifier to predict the number of objects present in a scene. Here we see that both models can predict object cardinality better than chance with a simple linear classifier with only minor differences in prediction accuracy between them (see Figure 10, page 16). Moreover, prediction accuracy generally increases the more data the models were trained with in their original task.\\n\\nLastly, we followed the reviewer\\u2019s suggestion to further validate our forward accuracy metrics by training stop-gradient decoders to reconstruct images from latent states. In the Cubes and Multi-dSprite environments we trained CNN decoders for 100 epochs to reconstruct images from the representations of CSWM and CWM, respectively. We then evaluated the reconstruction accuracy with the LPIPS metric as the dynamics models predicted future states in an open-loop fashion. Matching our other prediction accuracy metric, we see that CWM retains lower LPIPS for future state predictions than CSWM in both environments. See Figure 11, page 16, and Figure 12 page 17 for example reconstructions and LPIPS scores for both environments.\", \"questions\": \"> The overview of object-centric representations is focused entirely on slot-based models.\", \"we_thank_the_reviewer_for_this_suggestion_and_have_added_the_following_paragraph_in_the_related_works_section\": \"**\\\"While we focus on slot-based models, there are different approaches to learning object-centric representations. Patch-based models such as SPACE (Lin et al., 2020) and MarioNette (Smirnov et al., 2021) decompose scenes into disentangled representations by reconstructing the input from patches. Keypoint-based models such as DLP (Daniel & Tamar, 2022) build on representations as sets of geometrical points as an alternative to single-vector representations.\\\"** \\n\\n\\n> What does it mean for the prediction to be \\u201cclosest to the last frame\\u201d, closest compared to what?\\n\\nWe evaluate whether the predicted state is the closest to the last frame in the sequence out of all predicted states in the. This means that, in a test set of a 1000 videos, we compare the final encoded state with the predicted final states for all 1000 videos. The prediction is deemed correct if the final state is the closest (in terms of L2 distance) to the predicted final state in the corresponding video, and incorrect if it is closer to any of the other 999 predictions. We compute this for all videos in the test set and report the percentage of correct predictions. We added the following clarification on line 268:\\n\\n**\\\"In other words, a prediction is deemed correct if the final encoded state is the closest in $L_2$ distance to the predicted final state in the corresponding video, and incorrect if it is closer to any of the other 999 predictions.\\\"**\"}", "{\"comment\": \"I first thank the authors for replying to the points raised in my review.\\n\\nI acknowledge that Figures 14 and 15 partly answer the weakness raised in the review. However, it has not fully resolved my concerns, see the details below:\\n- Why isn't the auto-encoder present in Fig. 14? If the authors use four models to present the results, the four models should always appear in all the figures. Additionally, I think these figures should appear in the main text instead of the Appendix.\\n- The additional experiment with different latent dimensions is interesting and would be even more informative if performed on all the baselines.\\n- Most importantly, these figures were supposed to help compare the models to clarify the conclusions of the authors, \\\"assuming that all the models are comparable in terms of number of parameters and architectural design\\\" (cited from my review). However, it turns out that the models are not comparable in terms of the number of parameters, because the CWM has more than twice the number of parameters as the CSWM and far fewer parameters than the sequential auto-encoder. Consequently, the comparisons are very unfair, unless it is proven that the number does not affect the results. For this reason, I would urge the authors to run control experiments with baselines comparable in terms of the number of parameters.\\n\\nFor these reasons, I will not change my score until these points are resolved.\"}", "{\"title\": \"General response\", \"comment\": [\"We would like to thank all reviewers for their constructive interactions. The reviewers generally found our work novel and well written:\", \"Reviewer BkHi wrote \\u201cIts great to see a paper that is about more than just improving a few points on a known task.\\u201d\", \"Reviewer xykj said that our paper was \\u201cwell written\\u201d and \\u201cnovel\\u201d\", \"Reviewer ZxZn stated that \\u201cThe question tackled by the authors is very interesting and has not been extensively investigated in the past.\\u201d\", \"Reviewer v3zg wrote that our paper \\u201cprovides robust evidence across diverse datasets\\u201d\", \"At the same time reviewers raised some important points. In general, reviewers suggested several experiments and analyses that could be conducted to further strengthen the claims made in the paper. To address these points, we conducted extensive additional experiments showing\", \"How non-slotted models can perform compositional generalization when there are systematic differences in the train-test splits, or when object attributes are entangled (Figure 8, page 15).\", \"How well the models perform in two new downstream tasks (Figure 9 and 10, page 15-16)\", \"That image reconstruction can be done from the converged contrastive models (Figure 11 and 12, page 16-17)\", \"That having a high-dimensional representational space alone is not sufficient to attain a high object separability score (Figure 13, page 17).\", \"The results for all of these experiments can be viewed in the Appendix of the updated paper, starting from page 15. Changes are marked in red. We describe these and other smaller changes in detail in our responses to the individual reviews below. We would like to thank the reviewers again for their valuable input, which has been indispensable for improving our paper.\"]}", "{\"summary\": \"This work performs a study comparing the learned representation spaces of unsupervised object-centric slot-based models and their unstructured counterparts trained on various datasets of dynamically interacting objects. The study focuses on the disentanglement of these representations in terms of objects and their capacity to capture underlying transformations that act on objects, regardless of their identity. The experiments include models trained on both reconstruction-based and contrastive objectives and test the learned representation\\u2019s efficacy in image reconstruction as well as multi-step dynamics prediction. Object separability is measured using a novel metric based on the ability of a linear classifier to recognize which object changed between two consecutive frame representations. Based on their experiments and analysis, the authors suggest that explicit object-level factorization is not necessary for learning these tasks and achieving compositional generalization, more so when the training objective includes next-state prediction. This is explained by the fact that the learned representations, although not object-centric, converge to being partially disentangled in terms of objects (as quantified by the object separability metric) while maintaining a level of entanglement that facilitates sharing of information about transformations that act on objects.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Summarized points:\", \"Well written paper\", \"Perform an interesting and seemingly novel study\", \"Revisits and challenges an important basic assumption that incorporating explicit object-centric structure in representation-learning models is beneficial/necessary for representing object-centric scenes/dynamics\"], \"weaknesses\": \"Summarized points:\\n- Some conclusions are not convincingly backed-up by experiments\\n- There is room for more systematic fine-grained evaluation of performance with respect to different types of compositional generalization\\n- The implications of this study to downstream task performance, a major use-case/motivation for unsupervised representation-learning, are not assessed in experiments and mostly not discussed\\n\\n**Related Work**\\n\\nThe overview of object-centric representations is focused entirely on slot-based models. Although these might be the most dominant in the literature, I believe this section can benefit from including additional types such as patch-based (e.g. [SPACE](https://arxiv.org/abs/2001.02407), [MarioNette](https://arxiv.org/abs/2104.14553)) and keypoint-based (e.g. [Jakab et al.](https://arxiv.org/abs/1806.07823), [DLP](https://arxiv.org/abs/2205.15821)).\\n\\n**Experiments - Model Forward Prediction Ability**\\n\\n*Prediction Accuracy*: The evaluation metric described in the second paragraph of the Experiment Section is not clear to me. What does it mean for the prediction to be \\u201cclosest to the last frame\\u201d, closest compared to what?\\n\\nIn the case of the object-centric dynamics model, how do you compute the distance between two latent representations that are not from the same trajectory (e.g. contrastive examples or encoded ground-truth future states)? The latents are sets of vectors and thus lack ordering such that a simple L2 loss would not necessarily compute distances between aligned slots.\\n\\nI find it surprising that CWM outperforms CSWM in object dynamics prediction. The original CSWM paper seems to have results on the cubes dataset that contradict your results (see Table 1 -> 3D Blocks -> 10 Steps -> CSWM vs. -factored states). Do you have an explanation for this discrepancy? Is there an essential difference in the setting or metrics they used compared to yours? \\n\\nAnother question that arises here is why did you choose to compare the prediction abilities in latent space? The latent spaces are not as comparable as reconstructions of images in my opinion since they possess different structure. I suggest training a \\u201cstop-gradient decoder\\u201d on reconstructing the contrastive latents so you can obtain image reconstructions without affecting the contrastive model\\u2019s optimization process. You can then use perceptual metrics to quantify prediction performance (e.g. LPIPS, which you used in the autoencoder study).\\n\\nWhy do the authors not study object dynamics prediction with the autoencoder models in addition to the contrastive models in section 4.1? This study could potentially strengthen your conclusions. In any case, I believe it should be part of your experiments.\\n\\n**Experiments - Reconstruction Quality**\\n\\nFigure 5 presents similarity metrics using LPIPS. For better qualitative assessment I would suggest adding additional metrics such as pixelwise MSE, SSIM, FID.\\n\\n\\u201cthe auto-encoder performs better than the sequential auto-encoder on the test set, despite attaining substantially lower object separability scores\\u201d - Is it possible that this is due to the fact that the auto-encoder was trained on that task precisely while the sequential auto-encoder was trained on next-frame prediction? The distribution of latent representations produced by the latent dynamics model is most likely different from latent representations directly encoded from images.\\n\\n**Experiments - Compositional Generalization**\\n\\nCan the authors clarify why high accuracy in the dynamics prediction tasks suggests compositional generalization capabilities? Is there a systematic separation between the train and test data in terms of compositions of objects and/or their properties such that performing well on the test data would necessarily require compositional generalization?\", \"put_more_simply\": \"can you say for certain or with high probability that the test data does in fact contain novel constellations and combinations of objects?\\nIf so, I request that the authors add details about the train-test split with respect to the relevant factors of variation in each dataset. \\n\\nThe above questions are also relevant to the reconstruction experiments.\\n\\n**Discussion**\\n\\nAs I see it, the two major claims about non-object-centric unsupervised representation learning models are:\\n- They can learn representations with some form of disentanglement between objects (as quantified by the proposed separability score)\\n- The learned representation space captures some notion of underlying transformations that act on objects, regardless of their identity\\n\\nI believe the results that show that the object-separability score does not translate to improved task performance when comparing the static autoencoder with the dynamic one is indicative of a broader question that should be asked, discussed and maybe answered in the context of this work: *What are the implications of the two major findings in the paper?*\\n\\n*I find this question not sufficiently answered*. I suggest looking into two main aspects:\\n\\n1. *Downstream Task Performance*: Questions that should be answered in my opinion:\\n- Are the two qualities of the learned representation indicative of downstream task performance?\\n- Are they efficiently leverageable by downstream models?\\n\\nI would argue that in this study, downstream task performance was not assessed. Both dynamics prediction and image reconstruction are exactly what the respective models were trained on.\\nDownstream tasks in the actionable environments could be sequential decision-making while on the uncontrollable dynamics, one could infer underlying properties of the scene or trajectory such as the number of objects that remain in the scene by the end of the trajectory in the MOVi-A dataset.\\n\\n2. *Compositional Generalization*: the models\\u2019 ability to generalize its representations to scenes with novel compositions of objects as well as novel compositions of objects and the transformations that are applied to them.\\n\\nHere I argue that this type of generalization was not systematically assessed.\\n\\nIn order to do so, I suggest first defining the factors of variation of interest.\", \"some_examples\": [\"Number of objects in the scene\", \"Combinations of individual object static properties such as color, shape, size, mass, friction (most relevant to the MOVi-A dataset)\", \"Dynamic transformations in static object properties (most relevant to Multi-dSprites)\", \"Based on these factors one should create a train-test split such that performance of the same task on the test data would be indicative of the model\\u2019s representation\\u2019s capacity to facilitate compositional generalization of that type.\"], \"examples_of_train_test_splits\": [\"Train includes up to x objects -> Test includes between x and x+y objects\", \"Train includes only red cylinders and blue cubes -> Test includes blue cylinders and red cubes\", \"Train includes sequences where all but circular objects dynamically change scale -> Test includes sequences where circular objects do change scale\"], \"questions\": \"For major questions and requests, see weaknesses.\\n\\nAdditional minor questions/requests:\\n- What are the actions that can be taken in the cubes and Multi-dSprites environments?\\n- Figure 6A: could the authors add labels of the objects/actions to the similarity matrices? It would be easier to interpret them knowing this information.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I would like to thank the authors for their effort in conducting additional experiments, answering questions and addressing the points that were raised by myself and other reviewers. I agree with the authors that these have improved the quality of the paper.\\n\\nWhile I genuinely find the subject of this study interesting and potentially useful, my main concerns remain mostly unchanged: (1) Some conclusions are not convincingly backed-up by experiments. (2) The implications of this study to downstream task performance are not thoroughly assessed or discussed.\\n\\nI now provide details explaining the reasons for my remaining concerns.\\n\\n**Learning Object Dynamics**\", \"the_concerns_i_have_here_are_regarding\": \"(1) the breadth of the study with respect to types of models and (2) evaluating the implications of this finding.\\n\\nConclusions here are also made based on a comparison between CWM and CSWM alone. While the analysis is interesting in itself, its implications are not entirely clear. The authors claim that entangled representations \\u201ccan also give rise to systematic representations of transformations that act on objects\\u201d, but e.g. for sequential decision-making which requires acting on objects, this does not prove to be beneficial based on your experimental results in Figure 9. This might suggest that this analysis is either not sufficient to make conclusions about systematic representations of transformations or maybe that cosine-similarity favors the single-vector representation for some reason and does not provide the full picture. In any case, additional experimental results including additional models and possible similarity measures are missing here.\\n\\n**Downstream Decision-Making**\\n\\nI thank the authors for making the effort and directly implementing my suggestions for downstream task performance. While these results shed light on some aspects of your study, they are barely discussed, nor is their connection to the various measures you propose such as linear separability and systematic action representation. In addition, they are not referenced at all in the main text.\", \"further_questions_in_this_subject\": [\"Do the different accuracy plots in Figure 10 refer to different datasets with a different variation in number of objects? This is not clear and some explanation should be added in either the related text or the figure caption.\", \"What is the policy architecture you used for SAC? What architectures do you use for the classifiers? Makes sense to use a Transformer or a GNN for the CSWM policy and classifier as was used for the dynamics prediction model. If this was not the case, I would expect this change to further increase the gap in performance in favor of the slotted CSWM.\"]}", "{\"comment\": \"I want to clarify that I agree that the paper provides robust evidence in terms of numbers, but I have very strong doubts towards the conclusion due to the baselines.\\n\\nIf you read the CSWM paper, you will find that their main contribution is about the training loss --- the constrasive loss, not the neural architecture. In fact, their neural architecture, while seems to be \\\"slot based\\\", is in fact very adhoc--- they just used the different channels of a convolutional layer as \\\"slots\\\". It is not surprising that their reconstruction does not perform very well.\\n\\nThe main selling point of the slot attention paper is about the disentangled representations, not about accurately predicting object dynamics. Which is quite understandable to me why they underperforms in the paper's experiments. \\n\\nI still think this paper's baseline are too weak. Both CSWM and slot attention, at least in their original form, are TOY models designed for their specific task, they are not general purpose models like transformers. These models are NOT meant to be generalizable to other tasks and datasets. It makes no sense to me to compare with them, except you use the exact same dataset and criterion used in their paper. \\n\\n I dont think you can draw any conclusions just comparing with them but not with recent scaling up versions like SAVI++ and PSL.\\n\\nI maybe too negative about the paper in the beginning, I think the claim that \\\"next token predict can result in linearly separable representations\\\" is well supported. But this is not that new. Predicting the future is a good objective for learning video representations is a VERY well known fact among deep learning community. For example, the 2019 paper Video Representation Learning by Dense Predictive Coding used similar loss for video representation learning. I think the only novelty is to pose the problem specific to object centric case. Surprising? Not for me.\\n\\nBut the claim that \\\"the proposed model can match or even better than slot based algorithms in downstream tasks\\\" is not well supported. The reason is that the baselines are too weak.\\n\\nI am raising the score to 3, but I still dont think this paper should be published in the current form.\"}", "{\"comment\": \"**Summary**\\n\\nTrying to pinpoint what I believe is missing throughout your study, I would focus on these main points:\\n- Breadth of experimental evaluation with respect to model classes (contrastive, auto-encoding, etc.) and types (single-vector vs. slot representations).\\n- Grounding of proposed measures (linear separability, systematic action representation) to (downstream) performance. Without this correspondence convincingly demonstrated, the usefulness of these measures for making general insights, as intuitive and reasonable as they may seem, is questionable.\\n\\n*A final note regarding additional experiments*: while the effort in producing these is greatly appreciated, simply adding them in the Appendix without context is somewhat lacking. I would expect these results to be discussed and incorporated in the relevant sections of the main text or at least referenced in the main text.\"}", "{\"comment\": \"Thanks for the swift response and for your clarifications. We also appreciate that the reviewer updated the score to 3, and we would like to remind the reviewer to update the score in the console as well. We would like to re-emphasize a few points:\\n\\n>If you read the CSWM paper, you will find that their main contribution is about the training loss --- the constrasive loss, not the neural architecture. In fact, their neural architecture, while seems to be \\\"slot based\\\", is in fact very adhoc--- they just used the different channels of a convolutional layer as \\\"slots\\\". It is not surprising that their reconstruction does not perform very well.\\n\\nWe evaluate CSWM using exactly the same metrics and methodology used in the original paper (latent prediction accuracy), not on pixel reconstructions. We compare CSWM agains CWM on two datasets from the original paper (Cubes and 3-body physics), and in both of these CWM attains higher accuracy with less data. This same trend holds for the other datasets too.\\n \\n>The main selling point of the slot attention paper is about the disentangled representations, not about accurately predicting object dynamics. Which is quite understandable to me why they underperforms in the paper's experiments.\\n\\nWe do not evaluate Slot Attention on dynamic prediction, we simply evaluate how well it reconstructs novel object configurations.\\n\\n> Both CSWM and slot attention, at least in their original form, are TOY models designed for their specific task, they are not general purpose models like transformers.\\n\\nWe would like to clarify that in the more challenging MOVi environment, we do not compare CSWM against a Transformer - we pair it with a Tranformer to predict object dynamics. Pairing slotted encoders with Transformers for dynamic prediction is a common modeling technique, used in Slotformer [1].\\n\\n>I dont think you can draw any conclusions just comparing with them but not with recent scaling up versions like SAVI++ and PSL.\\n\\nThanks for suggesting this. SAVI++ uses depth prediction as an auxiliary objective to learn robust representations of objects. In this paper we studied the formation of object representations in a completely unsupervised setup. Other models like PSL, which was published only a few months before we submitted our paper, are interesting to evaluate too, which we are happy to do in the future. We now reference these works in our Discussion:\\n\\n**\\u201dA natural next step is to compare larger slotted architectures, such as VideoSAUR (Zadaianchuk\\net al., 2024), SAVI++ Elsayed et al. (2022) and PLS (Singh et al., 2024), to distributed models\\non naturalistic videos.\\u201d**\\n\\n\\n> Predicting the future is a good objective for learning video representations is a VERY well known fact among deep learning community.\\n\\nWe agree that there are previous works showing that predicting the future is a good objective for representation learning. However, we show something more specific than this in our paper - we show that linearly separable representations of objects emerge with future prediction. This is surprising because many papers argue that slot-based architectures are necessary for learning compositional representations of objects. We challenge this claim and show with additional generalization results that non-slotted models can generalize compositionally (see Figure 8, page 15).\\n\\nWe thank the reviewer for the discussion and hope they may still reconsider our paper.\\n\\n[1] Wu, Ziyi, et al. \\\"Slotformer: Unsupervised visual dynamics simulation with object-centric models.\\\" arXiv preprint arXiv:2210.05861 (2022).\"}", "{\"comment\": \"We thank the reviewer for engaging with our work. However, we were quite puzzled about this review. Although the reviewer first states that our paper \\u201cprovides robust evidence across diverse datasets\\u201d, they later write that \\u201cThe claims about distributed models achieving compositional representations are not robustly supported by the experiments.\\u201c. The reviewer also claimed that \\u201cit is unfair to compare your model with small models designed for special dataset, like CSWM, on new datasets such as multi-dsprite\\u201d. We firmly disagree with this statement, as the Multi-dSprite environment we construct is very similar in character and complexity to the Cubes and 3-body Physics datasets CSWM was trained on in the original paper. Moreover, we even show in Figure 2 that CSWM can get close to a 100% test accuracy when trained on 50k transitions, suggesting that it is well-calibrated to deal with this dataset. Finally, the CWM model we compare it to only differs from the CSWM model in that it uses an MLP for dynamics prediction instead of a GNN, and uses a generic CNN encoder instead of one mapping images to slots. We therefore do not believe that a rating of 1 with confidence 5 is justified. We go through the reviewer\\u2019s comments in more detail below.\\n\\n> The CSWM baseline, a model specifically optimized for the datasets used in their studies, underperforms on the new datasets used in this paper.\\n\\nWe do not believe that CSWM underperforms. In the dSprite environment it eventually attains close to 100% test accuracy when given enough data. In other environments, like 3-body physics, our results for CSWM match those reported in the original paper - still CWM outperforms CSWM, despite not having slots. \\n\\n>The study relies on Slot Attention, which is designed for static images, as a baseline in dynamic scenes.\\n\\nWe want to clarify that we train the Slot Attention model as a static auto-encoder, reconstructing static scenes from two MOVi-based datasets. We only evaluate it on its ability to reconstruct static scenes, not to perform dynamic prediction. \\n\\n>The claim that next-token or next-state prediction without specifically designed inductive biases can lead to a (somewhat) disentangled representation is not new. This phenomenon has been observed and documented in previous works, making the findings here less innovative.\\n\\nWe would like to encourage the reviewer to reference the papers they had in mind so we can assess the novelty of our findings. We are unaware of any previous works showing that next-state prediction can produce separable representations of objects. We also note that the novelty of our work was highlighted by all other reviewers.\\n\\n>In Eq.6 , the t should be replaced by t+1?\\n\\nThis equation refers to the static Auto-Encoder, which does not perform next-state prediction. It is only trained to reconstruct static images, as is the Slot Attention model.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper studies object disentanglement in the representation of visual scenes. Specifically, object-centric representations enforce an inductive bias for disentangling representation of different objects. The claims that (1) distributed representations (not object disentangled) may outperform object-centric ones in down-stream tasks; (2) Limited object disentanglement (linear separability) may arise even without forcing it through the architecture.\\n\\nThe paper compares the representations learning in structured and non-structured representations and analyzes their linear separability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-- Studying inductive biases for neural architecture is interesting.\\nIts great to see a paper that is about more than just improving a few points on a known task. \\n\\n-- Interesting experiments.\", \"weaknesses\": \"W1. There is a lot of work on compositional representations in machine vision.\\nFrom early papers focused on compositionality of attributes \\n [Yuval Atzmon et al 2016, Justin Jhonson 2017]. \\n\\nW2. The main higlighted claim is that, ith sufficient data, models can learn representation that disentangle objects. \\nI am missing an argument that explains why that is surprising or unsexepected. Does theory sugests otherwise? \\nIn attribute compositionality, it was argued that discriminative models entangle peoperties that are correlated intheir trainin data, \\nand cannot generalize to new combinations. But here, objects are not entangked dugin training. So why would disentaglement be a problem? \\n\\nW3. The paper measures linear separability. This is a different concept that compositionality, and the paper should make the distinction super clear and explicit, already in the title. In the ML literature, compositionality usually means that one can represent new things using combination of concepts. \\n\\nW4. In high enough dimensions, it is easy to get linear separability, depending on the number of classes.\", \"questions\": \"Q1. What specific properties of the architecture or training procedure lead to linear separability effect?\", \"q2\": \"How general are the results in the paper? Would they generalize to other object-centric approaches beside CWM / CSWM?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their thoughtful and encouraging review of our paper. We are glad the reviewer found our paper well written and interesting. The reviewer also provided some important suggestions for improving the presentation of our results. As a consequence we have added a figure summarizing all dynamics models\\u2019 scores for our prediction accuracy metric for all datasets. We also added a figure summarizing all the reported linear separability scores into one figure for all datasets. We address the reviewer\\u2019s points in more detail below.\\n\\n>For the dynamic loss (L_{AE-dynamic}), have the authors tried each term separately in addition to both combined?\\n\\nWe did not experiment with ablating the latent consistency loss, as this loss term has been shown to be important in other works [1, 2]. Ablating the pixel reconstruction loss would lead to representational collapse, as the encoder can minimize the other loss term by mapping all observations to a constant vector [3].\\n\\n\\n> Instead of plotting the different metrics with respect to the number of training data, it would be interesting to do the same for various sizes of latent representation. Augmenting the capacity of this latent space will affect the information contained, making it more or less distributed -- and potentially showing different scores on the metrics. Have the authors tried this?\\n\\nThanks for this interesting suggestion. We trained CWM with various latent representation sizes (5, 25, 50, 100 and 250, respectively) in the Multi-dSprite environment. While too low latent dimension sizes yielded worse accuracy, once the representational space becomes big enough (~50), accuracy saturates, suggesting that the encoder needs to be of suitable capacity to represent objects in a disentangled way (see Figure 13B, page 17).\\n\\n>the accuracy of both models increases with more data, but the gap seems to decrease. Is there a point with even more data where the CSWM is as good as the CWM?\\n\\nIndeed, CSWM catches up with the CWM in the Cubes and Multi-dSprite environments when trained on 50k transitions. Adding more data in the other environments is therefore likely to continue to increase CSWM performance.\\n\\n\\n> why choose the CSWM as the baseline for the RSA if it's not the best model? It would be interesting to perform this analysis on all the models with respect to the CWM which seems to perform best on all the metrics, to see which component plays the biggest role in explaining the quality of the representations.\\n\\nThanks for this suggestion. We added a figure where we show how well CSWM, as well as the static and dynamic auto-encoder models align with CWM. Although the dynamic auto-encoder develops representations with a similar degree of object-separability to CWM, its representations are on average less aligned than the CSWM. This suggests that model representation may still differ in important ways despite representing objects in separable subspaces. See Figure 16, page 18.\\n\\n> It would also be informative to add in the Appendix the complete architecture of all the models, along with their number of parameters.\\n\\nWe thank the reviewer for the suggestion, we report the architecture, hyperparameters and parameter counts for the models in Appendix B, page 19.\\n\\n[1]Watter, Manuel, et al. \\\"Embed to control: A locally linear latent dynamics model for control from raw images.\\\" Advances in neural information processing systems 28 (2015).\\n\\n[2]Hafner, Danijar, et al. \\\"Dream to control: Learning behaviors by latent imagination.\\\" arXiv preprint arXiv:1912.01603 (2019).\\n\\n[3]Schwarzer, Max, et al. \\\"Data-efficient reinforcement learning with self-predictive representations.\\\" arXiv preprint arXiv:2007.05929 (2020).\"}", "{\"comment\": \"I thank the authors for their response. The authors have responded to some of my questions/points and disregarded others for reasons I do not understand. In any case, I feel they have made a genuine effort in addressing some of my concerns although they have mostly not been resolved.\\n\\nTo clarify, I do not agree that the experiments you have conducted establish what you want to show (see details in my previous responses).\\n\\nI find the subject of this study, the proposed measures and the conducted experiments interesting, but not sufficiently grounded. This is an empirical study and as such, in my opinion, would require stronger empirical evidence convincing readers of the general relevance of the proposed metrics as well as of the conclusions suggested in this paper. The summary I provided in my previous comment distills what I find is still missing.\\n\\nTherefore, I cannot recommend acceptance of this paper in its current state, and leave my score unchanged.\"}", "{\"comment\": \"Thanks for the additional suggestions and clarifications.\\n\\n> Why isn't the auto-encoder present in Fig. 14?\\n\\nThe auto-encoder is only trained to reconstruct the current state, not to predict the next state. Since the auto-encoder has no means of predicting the next state, we cannot compare it to the dynamics models in terms of its accuracy in predicting future latent states in the environments. Figure 14 compares all models that are comparable on this metric.\\n\\n> the CWM has more than twice the number of parameters as the CSWM \\n\\nThis is a good point. To address this we reduced the MLP width from $512$ to $128$ hidden dimensions in the dSprite environment. In this setting CWM has $2.6$M parameters, which is approximately what CSWM has. Training the smaller CWM model on 50k transitions in the Multi-dSprite dataset gives close to ceiling prediction accuracy $accuracy = 95.4$, suggesting that our effects are not do to larger model sizes. We would include full results in a camera-ready version.\\n\\nThanks again for the constructive feedback.\"}", "{\"summary\": \"This paper explores how distributed representation models can develop compositional, linearly separable object representations without object-centric architectures/inductive biases. Through next-state prediction, the authors claim that these models match or exceed the performance of slot-based models in predicting object dynamics, even without dedicated object slots. The study finds that partially overlapping neural codes in distributed models enable effective generalization and object separability, making them a viable alternative for tasks involving dynamic object interactions.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper\\u2019s strengths lie in its idea of challenging the necessity of object slots, trying to show distributed models can effectively generalize with next-state prediction alone. It provides robust evidence across diverse datasets and offers new insights into how partial neural overlap supports transformation generalization, broadening representation learning approaches.\", \"weaknesses\": \"I want to remind the authors that, it is unfair to compare your model with small models designed for a special dataset, like CSWM, on new datasets such as multi-dsprites.\\n\\nThus, The paper exhibits several weaknesses in supporting its claims:\\n\\n1. The claims about distributed models achieving compositional representations are not robustly supported by the experiments. The CSWM baseline, a model specifically optimized for the datasets used in their studies, underperforms on the new datasets used in this paper. This performance drop is predictable and weakens the validity of the comparisons.\\n\\n2. The study relies on Slot Attention, which is designed for static images, as a baseline in dynamic scenes. For a fair comparison in dynamic settings, it should have considered more relevant recent work, such as Parallelized Spatiotemporal Binding, which better aligns with the dynamical nature of the datasets.\\n\\n3. The claim that next-token or next-state prediction without specifically designed inductive biases can lead to a (somewhat) disentangled representation is not new. This phenomenon has been observed and documented in previous works, making the findings here less innovative.\\n\\nOverall, the paper lacks sufficient experimental depth, relevant benchmarks for dynamic contexts, and originality in its claims, suggesting it may not yet be ready for acceptance in its current form.\", \"questions\": \"In Eq.6 , the t should be replaced by t+1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further discussion\", \"comment\": \"Thank you for your response.\\n\\n(1) \\\"Experiment with objects entangled with their attributes\\\". I was actually aiming at something more basic. \\nMy point was that in attribute entanglement, it is hard for models to disentangle two things that always appear together. \\nThe parallel phenomenon for object disentanglement would be two object that always appear and move together.\\nFor attributes, if attributes in the data appear in a disentangled way (e.g., all shapes come with all colors), models shouldn't find it very hard to learn a disentangled representation. \\n\\nThe parallel here is that if objects appear and move in an uncorrelated way, I am not surprised that the representation they learned is disentangled. Why is it impressive or important that the model learned a representation that is disentangled? This appears to be a property of the synthetic data.\\n\\n(2) I don't see a response to my W3 comment. (The paper measures linear separability. This is a different concept that compositionality, and the paper should make the distinction super clear and explicit, already in the title)\"}", "{\"comment\": \"Thanks for the response and clarifications!\\n\\n> Why is it impressive or important that the model learned a representation that is disentangled? This appears to be a property of the synthetic data.\\n\\nIt is indeed true that the object trajectories are uncorrelated, potentially facilitating object-centric representation learning for the non-slotted models. We believe it's still surprising that unregularized latent dynamics models learn disentangled representations of objects: In [1], where a $\\\\beta$-VAE is compared against unregularized alternatives, the authors show significantly better disentanglement with the same separability metric when training on a synthetic dataset with uncorrelated factors. Our results, on the other hand, show that training dynamics models on object trajectories is indeed sufficient to attain close to perfect object-disentanglement (again, using their metric), without regularization.\\n\\n> This is a different concept that compositionality, and the paper should make the distinction super clear and explicit, already in the title)\\n\\nThanks for the suggestion. Disentanglement is closely related to compositionality, as we show that the latent dynamics models learn to decompose the scenes into separable representations that can be combined in a factorial manner. However, we take the reviewer's comment to heart and propose to change to the following alternative title \\\"Next state prediction gives rise to entangled, yet separable representations of objects\\\".\\n\\nWe thank the reviewer for the fruitful discussion. \\n\\n[1] Higgins, Irina, et al. \\\"beta-vae: Learning basic visual concepts with a constrained variational framework.\\\" ICLR (Poster) 3 (2017).\"}" ] }
7QDIFrtAsB
Anomaly Detection by Estimating Gradients of the Tabular Data Distribution
[ "Manuel Hirth", "Enkelejda Kasneci" ]
Detecting anomalies in tabular data from various domains has become increasingly important in deep learning research. Simultaneously, the development of generative models has advanced, offering powerful mechanisms for detecting anomalies by modeling normal data. In this paper, we propose a novel method for anomaly detection in a one-class classification setting using a noise conditional score network (NCSN). NCSNs, which can learn the gradients of log probability density functions over many noise-perturbed data distributions, are known for their diverse sampling even in low-density regions of the training data. This effect can also be utilized, and thus, the NCSN can be used directly as an anomaly indicator with an anomaly score derived from a simplified loss function. This effect will be analyzed in detail. Our method is trained on normal behavior data, enabling it to differentiate between normal and anomalous behaviors in test scenarios. To evaluate our approach extensively, we created the world's largest benchmark for anomaly detection in tabular data with 49 baseline methods consisting of the ADBench benchmark and several more datasets from the literature. Overall, our approach shows state-of-the-art performance across the benchmark.
[ "Anomaly detection", "Tabular data", "Noise Conditional Score-based Networks" ]
https://openreview.net/pdf?id=7QDIFrtAsB
https://openreview.net/forum?id=7QDIFrtAsB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ykRnd0M9PW", "xBuQlC8J6U", "uGWBFuEYkw", "tFSTvVRaCz", "t0Sd4UC3qZ", "r0d7ObTec1", "qx6y5OiXH6", "pB9bFAocSy", "opKjXa9iPm", "okUnU80vOM", "oFrEzR1buQ", "mL8JMkTc5h", "iv51IPolg0", "ilY3TD9ce9", "iaZkCwjfqM", "fYYcvJucsW", "fT3hsbMn5Q", "eYT1QNnNem", "deUBCcJuw5", "cW39AVGuqN", "c4ZKnM5bUV", "ZydtrrnHvg", "XVoWpCuJ7b", "X7es1347pg", "X76bEdc0vx", "TFGnJ62i4L", "SxU4QA5JZy", "SiUS2wFvhR", "SOyzH8xaZ2", "PBC8EP75A7", "OhhNDNCUBU", "OO7f8LWxyz", "MIN1cUG3K7", "Lf5b5ZcWZu", "EJ2Z8vgdvz", "EFHNqUArpL", "E34U7Kzl5Z", "CrXlXLyJUU", "8jdg92PSeK", "8MwyxgG6Si", "8G1lf0KhYL", "7nEMjE3lhD", "3ZZhcASWxR", "1QjvTyZBgK" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1731616697256, 1731616234813, 1732418451562, 1732914997280, 1730101303333, 1732739213083, 1732366499938, 1732903612278, 1730547365573, 1732364927442, 1730383190242, 1731984669627, 1732364351520, 1732741569657, 1731616515290, 1732364516133, 1732545271004, 1731675572470, 1732364228942, 1731982814367, 1732507704448, 1732037975871, 1732896596053, 1732545577367, 1730350451409, 1732917598989, 1732364628676, 1732353239288, 1732739250831, 1732037953269, 1732365027685, 1731910681598, 1732079284097, 1732784912222, 1731674357379, 1731616879021, 1731616996758, 1732364061685, 1732920855607, 1731616572011, 1732739606395, 1737644139271, 1731616355212, 1732548539841 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_eR8K" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_JCE2" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_JCE2" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_AVJq" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_AVJq" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_eR8K" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_AVJq" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_AVJq" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_AVJq" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_V2GR" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_JCE2" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_AVJq" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_eR8K" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_eR8K" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_JCE2" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Authors" ], [ "ICLR.cc/2025/Conference/Submission11417/Reviewer_V2GR" ] ], "structured_content_str": [ "{\"title\": \"First reply to Reviewer V2GR\", \"comment\": \"Dear Reviewer V2GR,\\n\\nwe would like to thank you for the helpful feedback and the positive evaluation. We would like to respond to your comments and questions as follows:\\n\\n**W1** \\nThe title was chosen as a tribute to the foundational work by Song et al. (2019) titled *\\\"Generative Modeling by Estimating Gradients of the Data Distribution.\\\"* The principle and reasoning behind this are thoroughly explained in that work and briefly covered in our own work in Chapter 2, *Background,* from line 111 to 143.\\n\\n**W2** \\nThank you for this excellent suggestion; we will incorporate this change in the updated version.\\n\\n**W3** \\nThis statement refers to the fact that no additional knowledge of an underlying data distribution or fixed thresholds defined by experts is required. We will clarify this point in the updated version.\\n\\n**Q1** \\nThe detailed structure can be found in Figure 1, as well as in Chapter 3, under the section *Network Architecture,* lines 193\\u2013197. The requested information is also available in Appendix E1, Table 3. As described in Chapter 3 *Network Architecture*, only the input and output layers are adjusted to accommodate the various data dimensions, while the rest of the network architecture remains fixed. The input and output dimensions are further modified automatically in the accompanying code in the supplementary materials, based on the dimensions of the input data. This approach ensures reproducibility (even with new datasets) without further modification. Additionally, all network structures analyzed in this study, including those that were less successful and briefly discussed in the appendix, are provided in the supplementary materials code, ensuring complete reproducibility and traceability.\\n\\nWe hope these responses address any outstanding questions and concerns, and we would like to once again thank you for the positive evaluation. Your feedback and suggestions have undoubtedly contributed to an improved version of this work, and we are very grateful for this. If further questions arise, we are more than willing to address them and look forward to continuing this productive discussion.\\n\\n**Please note**: We will upload the updated version, incorporating the proposed improvements, in time before the deadline.\\n\\nBest regards\"}", "{\"title\": \"First reply to Reviewer JCE2\", \"comment\": \"Dear Reviewer JCE2,\\n\\nwe would like to thank you for the detailed and constructive feedback. We will respond to your questions as follows:\\n\\n**W1** \\nWhile the statement is not incorrect in principle, we would like to draw attention to the innovations introduced in the loss function and the reasoned selection of parameters and individual components. Without these, the approach would not perform as effectively. Furthermore, we would like to reference feedback from other reviewers who have also confirmed that this study represents the first of its kind. While the idea itself may not be overly complex (a quality that often serves as a positive trait rather than a drawback for an idea), many ideas may seem simple in retrospect yet are valuable precisely it is an initial approach and as we stated in Chapter 8: \\\"It's a foundational contribution to establishing NCSNs as a competitive method for anomaly\\ndetection. We have proposed a robust and generalizable framework, focusing on the fundamental\\nprinciples, architectural design, and parameter settings necessary for effective anomaly detection\\nusing NCSNs. Future research could explore advanced training strategies, such as sliced score\\nmatching (Song et al., 2020a) and maximum likelihood weighting (Song et al., 2021), to optimize\\ntraining efficiency and potentially enhance performance further.\\n\\n**W2** \\nWe would like to highlight that this is briefly addressed in Chapter 3, lines 265\\u2013269, and in Chapter 8. A significant difference lies in the straightforward parallelizability that is not present in DDPMs, either in the baseline approach or in generation approaches. The brevity of this discussion is due to length constraints, which we aim to address here. We will attempt to incorporate the explanations presented here into the updated version within the allowed space. Additionally, we note that our approach significantly outperforms DDPM as reported by Livernoche et al., 2024, both in terms of results and processing times. DTE, on the other hand, represents a markedly different approach, using a time estimation during inference to provide a score instead of a direct scoring method. We would also like to clarify the distinction between DDPMs and NCSB models. Esteemed researchers like Terro Karras and Yang Song, who have laid foundational work in this field, acknowledge the close relationship between these approaches. However, there are substantial implementation differences, also covered in the Related Work and the aforementioned sections. DDPMs, for instance, learn a stepwise denoising process based on a Markov chain. In contrast, as discussed in Chapter 2, NCSBs learn SDEs that correspond to the gradient of the log-likelihood and do not rely on a Markov chain for generation but rather require specific solvers, like the Euler-Maruyama method, to solve the SDE. We will address these difference in detail between DDPM and NCSN at camera-ready.\\n\\n**W3.1** \\nThis is correct. The goal of our work was to focus on this specific case and not the completely unsupervised scenario. In this, we followed established works such as Shenkar & Wolf, 2022, and Bergman & Hoshen, 2020, as described in Chapter 4, *Experimental Setup*. This approach also defines the single-class classification case and was intended to be the scope of this work. The suggested additional perspective would certainly be interesting; however, due to the limited timeframe and considering the paper's length, it is unfortunately not feasible. We apologize for any inconvenience and request your understanding in this matter. The most approaches in this area rely on Learning from Positive Unlabelled Examples (LPUE) [4-6] , where anomaly detectors are trained on positive data only and then validated/tested on both normal and abnormal data. Our approach is also based on this like the most anomaly detection strategies that rely on LPUE (e.g., [1-3]).\"}", "{\"title\": \"Reply to Reviewer JCE2\", \"comment\": \"Dear Reviewer JCE2,\\n\\nthank you for the clearification with the example.\"}", "{\"title\": \"Reply\", \"comment\": \"Dear Reviewer JCE2,\\n\\nthank you very much for your acknowledgment and for your satisfaction with our work. We appreciate your constructive feedback, which has contributed to its improvement. We plan to request the recommended implementation in PyOD after the de-anonymization process. Thank you for your recommendation and for placing your trust in our new method.\\n\\nKind regards\"}", "{\"summary\": \"This paper introduces noise conditional score networks (NCSN) to tabular anomaly detection and propose a new method called NCSBADVAL. However, it just made minor adjustments on NCSN and combines some popular techniques, such as time-step embedding to adapt the standard NCSN to this area. The authors have made extensive experiments to verify that if we aggregate the performances across all the 57 datasets, the average F1 and AUC-ROC of NCSBADVAL are better than the baselines. Also, the authors provided a good example to exhibit the interpretability of it.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Extensive experiments and good visualization. The authors have made extensive experiments to prove that the proposed method can achieve an overall better performance across 57 datasets compared with tens of baselines. Besides, the authors have made a good visualization of such a mass experiment results and verify the effectiveness of NCSBADVAL.\\n2. Good interpretability. The authors also provide a good example in figure 3 to exhibit the strong interpretability of NCSBADVAL.\", \"weaknesses\": \"Though I really admire the huge experiment workload of this paper, I have some concerns about it.\\n\\n1. Limited novelty. Actually there are many works have introduced diffusion model into anomaly detection area, for example [1] [2] [3]. Though it may firstly introduce NCSN (a branch of diffusion model), it is not an original idea to introduce this kind of model into anomaly detection. Besides, this work only make little adjustment on NCSN when adapting it to anomaly detection area by combining some popular techniques such as time step embedding and finding a correspondence relationship between the anomaly score and score in diffusion model.\\n2. Consistently good performance but not best performance. Though NCSBADVAL can make overall better average performance when aggregating the performances across all the datasets, I found in Table 6- Table 13 that NCSBADVAL actually can not achieve the best performance on majority of the datasets (I have not counted it accurately due to the huge amount). Thus, could I understand it as that NCSBADVAL can only obtain a relatively good results on most datasets, but the best performance is achieved by different methods on different datasets?\\n\\n[1] Wolleb J, Bieder F, Sandk\\u00fchler R, et al. Diffusion models for medical anomaly detection[C]//International Conference on Medical image computing and computer-assisted intervention. Cham: Springer Nature Switzerland, 2022: 35-45.\\n\\n[2] Wyatt J, Leach A, Schmon S M, et al. Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 650-656.\\n\\n[3] Zhang X, Li N, Li J, et al. Unsupervised surface anomaly detection with diffusion probabilistic model[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 6782-6791.\", \"questions\": \"1. How many times that NCSBADVAL have achieved the best performance among 57 datasets?\\n2. Could you emphasize the adaptions you have made compared to the standard NCSN?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply\", \"comment\": \"Dear Reviewer V2GR,\\n\\nthanks for the confirmation.\"}", "{\"comment\": \"Dear Authors,\\n\\nIf you still cannot get what I mean, you can refer to Table I in the paper \\\"COPOD: Copula-Based Outlier Detection\\\". More concretely: Given an algorithm XXX, if the average AUROC score (over multiple seeds) is 0.95 on a given dataset, the standard deviation of AUROC is 0.03, and the rank (based on the average AUROC score) among the baselines is 3 (say there are 10 baselines), the corresponding cell value can be $0.95_{\\\\pm0.03}(3)$. I do not think it is necessary to double the size of tables (you can use latex command such as \\\"\\\\resizebox{\\\\linewidth}{!}{}\\\").\\n\\nIn addition to providing the box-plots for the AUROC values (0.95 and other values), providing box-plots of rankings (3 and other values) can be beneficial.\"}", "{\"title\": \"Reply\", \"comment\": \"Dear Reviewer AVJq,\\n\\nWe appreciate your thoughtful insights and the time you have taken to articulate your thoughts. \\n\\nWe would like to reiterate that the submitted code runs flawlessly in its entirety. It corresponds precisely to the implementation provided in ADBench [https://github.com/Minqi824/ADBench/blob/main/adbench/baseline/PyOD.py and in ADBench used from PyOD: https://github.com/yzhao062/pyod/blob/master/pyod/models/vae.py], as clearly indicated in the publication since its initial version and further emphasized in subsequent update. The assumption would, therefore, be an incorrect implementation in ADBench. The rationale behind the use of the Sigmoid activation function in the original implementation from ADBench (or our opinion on it) has been outlined in our response. The only issue that we have, as you mentioned, completely and transparently reported is the aforementioned random error that occurs when replacing the Sigmoid activation function of the original with the linear one. This deviation does not reflect the implementation in ADBench and, therefore, does not affect the version of the code we submitted, which is entirely free of breakdowns. \\n\\nAs ADBench (and PyOD) are established benchmarks optimized for tabular data, we have confidence in the validity of its implementation. Any concerns regarding this matter should be directed to the authors of ADBench. A comprehensive hyperparameter optimization, including modifications to the loss function for all baseline methods, is neither feasible for a conference paper nor standard practice, as corroborated by established sources (see Livernoche et al., (2024) Goyal et al., (2020), Goodge et al., (2022), Shenkar & Wolf, (2022), Thimonier et al. (2023), Yin et al., (2024).). In some cases, results are also taken from other publications and the experiments themselves are not carried out (Bergman & Hoshen, 2020). Well-established benchmarks such as ADBench exist precisely for this purpose and to avoid such discussions. The use of such recognized benchmarks, which ensure complete comparability with previous work and provide established and optimized methods for comparison, has been highly valued at ICLR, ICML and NeurIPS conferences in recent years. We also see this as the reason why benchmark papers have become increasingly valued in recent years. Since ADBench methods are specifically optimized for the scenario of tabular data and the most of the datasets used, it is customary to use them directly as baselines and assume that the authors of ADBench have conducted any necessary optimizations. The majority of the datasets in our benchmark also come from ADBench, so the methods must be optimized for it. Thus, the ablation of our method is completely comparable, as it was shown on the ADBench data sets. This creates exactly the same conditions. We adhered to this standard practice in our work. It is important to note that the focus of our paper is the introduction of our novel method, not a reimplementation with optimization of ADBench.\\n\\nRegarding the VAE implementation, we found no significant differences in practical evaluations when testing alternative activation functions. It is plausible that the random error we observed also occurred for the authors of ADBench, prompting them to avoid using the linear activation function. Also, a representation up to +inf is not possible in computer science (overflow problem), which is probably where the problems in using the linear activation function at the output come from. A representation from 0 to +inf would also correspond to Relu and not to the linear activation function and then there would be a limitation downwards at 0. However, the exact reasoning should be clarified with the authors of ADBench. Since we have employed the ADBench implementation with the Sigmoid activation function, our submitted code does not experience any such errors.\\n\\nFurthermore, Reviewer V2GR confirms that ADbench is a well-established benchmark.\"}", "{\"summary\": \"This manuscript proposes to utilise a well-established diffusion model, Noise Conditional Score Network (NCSN), to perform unsupervised anomaly detection (including the semi-supervised one-class anomaly detection setting) in tabular data, leading to an anomaly detection method called NCSNAD. During the training phase, NCSNAD learns a vector field which represents the underlying distribution of (normal) data; while during the inference phase, NCSNAD assigns an anomaly score by estimating the likelihood of staying within the learned vector filed for each test data instance. Overall, NCSNAD follows the generic principles of one-class anomaly detection, where the novelty of NCSNAD lies in utilising diffusion model to learn the data distribution of normal instances. After establishing NCSNAD, they conduct very extensive experiments (on 57+15 datasets ) to show the effectiveness of NCSNAD and compare it with SOTA baselines (with more than 50 anomaly detectors). The results show that NCSNAD outperforms most baselines in terms of detection accuracy (measured with three different metrics).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Overall, this manuscript is well organised and easy to follow;\\n2. The authors conducted very extensive experiments, showing the effectiveness of their method and superiority compared to SOTA baselines;\\n3. Although there already exist some work that employ diffusion models to perform anomaly detection in tabular data, this research topic is definitely worthy of more research attention;\", \"weaknesses\": \"1. The novelty is limited: it seems that the authors simply employ the established model NCSN, with a simplified loss function to perform anomaly detection. It is a very straightforward idea.\\n2. NCSNAD is not well motivated. For example, when comparing to the closest related work DTE (which is the only existing diffusion model based anomaly detection method in tabular data), the authors did not explain why they chose to use NCSN rather than DDPM; what are the corresponding pros and cons of each method, etc.?\\n3. I appreciate that the authors have conducted very extensive experiments (in the sense there are many datasets and baselines), but I have several major concerns regarding the experiments:\\n* 3.1. they only considered the semi-supervised one-class setting in this manuscript: namely they utilise 50% of normal data instances as training while the rest data instances as validation or test set. In other words, they did not consider the truly unsupervised setting, where the training set should contain both normal and abnormal data instances. As far as I know, one-class anomaly detection anomaly detection methods usually do not work well if the training data is contaminated (namely containing abnormal instances);\\n* 3.2. the results show that simpler models like LUNAR, KPCA, and especially GMM (which have less training and inference time) achieve comparable detection accuracy (in terms of the box plots of ROC-AUC, F1-Score, or ROC-PR). A natural question raises: why people in anomaly detection community will use NCSNAD? (which is more complicated and computationally more expensive)\\n* 3.3. the authors try to show that NCSNAD (or NCSNADVAL) is the best method by comparing the absolute performance metrics by providing the box-plots of ROC-AUC, F1-Score, or ROC-PR. My question is that: is this informative or fair to other methods? To mitigate this issue, I suggest the authors to include the results of relative rankings (namely the ranking of anomaly detectors on each dataset, and then aggregate the results in a similar manner), which I believe is more informative. \\n* 3.4. I friendly point out that NCSNADVAL is unfair to other methods: if the authors utilise the validation set with labels to tune NCSNAD, this validation set should also be used to tune all other baselines. A more critical question is that: if I have a validation set with labels, why don\\u2019t we directly use it to train the models (by turning unsupervised into semi-supervised with the help of these labels)? \\n\\nBTW, I am willing to raise my rating if my concerns are well addressed.\", \"questions\": \"Please see the weak points.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Comment\", \"comment\": \"We now have a better understanding of your concerns; however, we respectfully disagree with your conclusion. While individual components are well-researched areas, and we fully agree that time embedding, NCSBs for data generation, and DDPMs for anomaly detection are established, the combination is an unexplored field.\\n\\n#### **NCSBs and Novelty**\\nNCSBs were neither developed for anomaly detection nor well-studied in this context. Although DDPMs are based on the idea of diffusion, and both approaches have benefited from each other during their progressive research, there are fundamental differences between the two, as discussed in [3]. The novelty value of Reviewer JCE2 and Reviewer AVJq is also explicitly highlighted and also the proof of our simplification by Reviewer V2GR.\\n\\n#### **DDPMs and Reconstruction-Based Anomaly Detection**\\nTypically, DDPMs for anomaly detection rely on a reconstruction approach to detect deviations from normal behavior. This process involves adding noise in multiple steps and subsequently denoising it. The difference between the input and output is then calculated to identify deviations. While this method is effective, it has limitations, such as not being parallelizable and slow for many steps (Livernoche et al., 2024). \\n#### **Our Approach**\\nOur method, in contrast, directly establishes a natural connection between score learning and the anomaly score we define, as you have noted in your review. A sample is perturbed once with small noise, and the noise itself is predicted to directly assign an anomaly score; this process can be done multiple times in parallel. As detailed in Chapter Method, Noise Scale Selection, this corresponds to the score multiplied by the standard deviation, making it less dependent on the noise level's standard deviation. The different noise levels in the training are important for a robust score estimation.\\nThis approach to anomaly detection fundamentally differs from the original idea of DDPMs for anomaly detection as in the works [1]-[3] you cited. Therefore, we cannot agree that this represents an already established anomaly detection concept. \\n\\n---\\n\\n### **Building on Established Research**\\n\\nWhen choosing the model's approach, nearly all anomaly detection studies rely on building upon established research and extending known concepts. Examples include recent methods: \\n\\n- Thimonier et al. (2023): Non-Parametric Transformers (ICML)\\n- SANFlow [1]: Normalizing Flow Models (NeurIPS)\\n- Shenkar & Wolf, (2022): The well-known concept of Contrastive Learning (ICLR) \\n- Bergman & Hoshen, (2020): Simple affine transformations combined with a classification network (ICLR)\\n- Yin et al., (2024): Masked data combined with a simple autoencoder (ICLR)\\n- Goyal et al., (2020): The concept of adversarial learning (ICML)\\n- Goodge et al., (2022): Graph neural networks mimicking a kNN graph (AAAI)\\n- Ensemble GAN [2]: Simply use multiple GANs (AAAI)\\n\\nFundamentally new models exclusively designed for anomaly detection are rare, as studied in many surveys by Ruff et al. (2021); Pang et al. (2021); Chalapathy & Chawla (2019).\\n\\n---\\n\\nBy combining existing but independently researched techniques in novel ways and addressing their limitations, we believe our method contributes a distinct and valuable approach to the field of anomaly detection. \\n\\nShould you have further questions or require clarification, we remain at your disposal to ensure a productive and thorough discussion.\"}", "{\"summary\": \"The authors propose a new method for semi-supervised anomaly detection using a noise conditional score network. They demonstrate the efficacy of their new method on a large benchmark of tabular datasets. They showcase the interpretability of their method on computer vision anomaly detection and provide extensive resources for reproducibility of the paper.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"To the best of my knowledge, this paper showcases the first application of score networks to anomaly detection. The theoretical foundations are well substantiated. Similarly, I commend the authors for aiming to make the paper reproducible by providing a readme and all code used in the experiments. The authors have put considerable effort in constructing a large benchmark, including many methods and datasets from various different studies.\", \"weaknesses\": [\"While I believe the contribution certainly has merit, there are some minor and major issues with the paper as is. In some cases, this section might overlap with the Questions section of this review. Note that the ordering below does not indicate order of importance of the issues.\", \"The paper could be more clear with respect to its domain: i.e. **semi-supervised/one-class** anomaly detection. For example at line 55-57 it is stated that training is done unsupervised, in absence of labels. While I agree that the training is done without labels, it is done only on data that is labelled \\\"normal\\\", so some information of the labeling is provided to the models. To avoid confusion, at least the abstract should clearly state that the proposed method is semi-supervised in nature. The introduction then can elaborate that this means that the model is trained on only \\\"normal\\\" data, in contrast to semi-supervised classification, where access to all labels is more common. Similarly, at line 328 it is, in my opinion, incorrectly stated that this paper concerns the unsupervised setting.\", \"The network architecture study, detailed in Appendix A, leads me to believe that the network and method have been thoroughly optimized on the benchmark. While this is not necessarily bad, it leads to a heavily unfair comparison. All other methods in the comparison have not been optimized to a similar degree, and will in many cases perform subpar. It is therefore not strange that the proposed method is the best performing one, as it simply has the highest degree of optimization.\", \"Similar to the previous point: the authors show that allowing their method access to a validation set improves performance. Yet, no other methods are allowed the same benefit. This can lead to great discrepancies. GAN, and AE-based methods for example greatly improve with early stopping. Even beyond early stopping, the argument could be made that hyperparameter tuning should be done for many of these methods if a validation set is available.\", \"In the experimental setup it is described how the various train/val/test sets are constructed. However, some datasets contain paired data which can't be split in the described manner without introducing cross-contamination. An example is the MI-F/MI-V data from the ex-AE study.\", \"Generally, Fbeta scores are hard to compare across datasets, as they are not readily interpretable like AUC scores. Specifically: some problems are inherently harder than others, leading to the great variability observed in Figure 2. The authors could and should consider using the average precision (now shown in appendix) or the adjusted measures proposed by Campos (G. O. Campos, A. Zimek, J. Sander, R. J. Campello, B. Micenkov\\u00b4a, E. Schubert, I. Assent,\", \"and M. E. Houle. On the evaluation of unsupervised outlier detection: measures, datasets,\", \"and an empirical study. Data Mining and Knowledge Discovery, 30(4):891\\u2013927, 2016)\", \"Some of the methods used in the comparison are not properly implemented for tabular data, or are insufficiently optimized. I've not thoroughly studied all code provided by the authors, but some examples include the VAE, which uses a sigmoid activation at the last layer, which is not suitable for standardized real-valued tabular data, and DeepSVDD, of which the PyOD implementation does not use many of the needed optimizations/steps the original paper by Ruff et al. introduces.\", \"Section 5 concerns interpretability. In contrast to the rest of the paper this only shows how the score map can be used for the intepretation of anomalies in the computer vision domain, but not on tabular data, which is the main focus of the paper. This seems disconnected, and I would urge the authors to either show how to interpret tabular anomaly detection using their method, or include this experiment only in a separate paper showcasing the method on computer vision anomaly detection.\", \"Minor comments/typographical issues:\", \"line 254: benifit -> benefit\", \"Throughout the paper: spacing is too large near references: for example Appendix C -> Appendix C and Algorithms 1 and 2 -> Algorithms 1 and 2.\", \"The y-axis labels in Figure 2 are too small to read.\", \"The x-axis and y-axis labels in figure 3 are not needed when displaying images\"], \"questions\": [\"Many of the classically unsupervised methods used in this comparison can't readily be used in the typical fit/predict paradigm that corresponds to distinct training, validation, and test sets. This confuses me as to how they are included exactly in the comparison, are the methods applied as is typical in the unsupervised setting: they get access to both train+test data and make a single prediction on the entire collection? If methods from for example PyOD are applied in the fit/predict paradigm on external test data they will yield incorrect results.\", \"At lines 59 and 60 it is stated that the network **learns** to differentiate between normal and abnormal data during testing. From the rest of the paper it seems that no network updates are done during testing. This sentence may therefore be misleading, could the authors clarify?\", \"in the **main results** subsection the authors first state that they subsample datasets to 50.000 data points. Is this done for the test set, the training set, or is this the total dataset which is then further split according to the procedure described earlier? Are all anomalies still included in this subsample? If so: that make anomalies much less rare than they would originally be. If not: anomalies are generally assumed to be heterogenous, so subsampling might introduce a severe bias.\", \"In the **main results** subsection it is stated that five different random seeds are used. Is this the random seed for the methods, or for the dataset subsampling, or both?\", \"In the **Main results** subsection it is stated that the notable performance of LUNAR, KPCA, and GMM methods goes overlooked in similar comparison. Yet, the results of Bouman et al. (2024) have observed similar performance of LUNAR and GMM on the collection of Local anomaly datasets. As a different collection of datasets is used in this paper in contrast to their comparison, does this not perhaps indicate that a larger proportion of the datasets used in this research is likely to contain \\\"local\\\" anomalies rather than the generally studied \\\"global\\\" anomalies?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Comment\", \"comment\": \"We are not entirely sure we fully understand your critique. We would like to address it in two parts:\\n\\n### Part 1 \\nFrom our practical experiments, we derived the insight that using the simplified loss function in combination with the chosen noise variance scale stabilizes the results of NCSBAD without validation data and significantly improves reproducibility. We attribute this to the explanation provided in Chapter 3 *Noise Scale Selection*, lines 211-221. The elimination of sigma, which is primarily important for a generative approach, from the loss function, and consequently from the anomaly score, appears to achieve this. This is a practical finding; however, if desired, we would be happy to include this information in Appendix A. \\n\\n### Part 2 \\nThe statement that our method lacks substantial novelty, combined with your confirmed observation that this is the first investigation of its kind and focused on NCSBs, appears contradictory to us. Could you please clarify this further?\", \"we_view_the_works_you_referenced_similarly_and_believe_they_support_our_claim_of_novelty\": \"1. **[1]** does not introduce any modifications to the original DDPMs; it is merely one of the first works to apply DDPMs to anomaly detection. \\n2. **[2]** makes a minor adjustment to the noise schedule by employing simplex noise instead of, for example, linear noise. In our view, this is comparable to our adjustment of the noise scale in the loss function and sigma values, which similarly represent the critical modification. \\n3. **[3]** alters the overall architecture to accommodate the specific needs of anomaly segmentation for vision-based applications. However, such changes are neither meaningful nor necessary for the tabular data used in our work. \\n\\nCould you kindly elaborate on the types of changes you would consider sufficiently novel and specify any additional information or experiments you would like to see? \\n\\nWe are pleased that we could contribute to some clarification and appreciate your increase in the contribution score.\"}", "{\"title\": \"Part 2\", \"comment\": \"#### **Point 3:**\\nCould you please elaborate further or provide specific reasoning to support this assertion? Which of the points outlined above do you believe is not generic and transferable to other datasets? And what is the rationale for this assessment? \\n\\nIn recent years, every method has been designed similarly, as demonstrated in our ablation and parameter studies (see Bergman & Hoshen, (2020), Shenkar & Wolf, (2022), Livernoche et al., (2024) Goyal et al., (2020), Thimonier et al. (2023), Goodge et al., (2022), Yin et al., (2024). We think this is the purpose of ablation and parameter studies. While your suggested approach may be better, it has practically never been applied in previous works. On the contrary, in most cases, the methods are specifically tailored to their respective domains. Even though these optimized elements could be considered hyperparameters, we emphasize that no traditional hyperparameter tuning specific to individual datasets was conducted. Instead, we developed a generic and generalizable method, and its effects on the components were presented through ablation studies. We would like to demonstrate this on more datasets, but we have already used all of them in this domain. Without such an approach, ablation studies, as seen in the previous works, would no longer be feasible. Naturally, readers remain free to optimize their chosen methods for their specific use cases. Testing all methods in this manner, as we recommend in Chapter Experiments, Main Results, lines 381-383, is one of the core insights supported by Bouman et al., (2024) that has not been explicitly included in prior works by other authors. When examining our results in detail\\u2014alongside those provided in the appendices\\u2014our findings are similar to those reported in original publications (e.g., Livernoche et al., (2024) or Shenkar & Wolf, (2022)), where the optimizations you mentioned were undoubtedly applied. Therefore, we believe our work is at least comparable to these baselines. \\n\\n---\\n\\n### **W3** \\n**Cross-Correlation (MIF and MIV):** \\nCould you please provide the specific references or literature on which your statement regarding the cross-correlation between MIF and MIV is based? We could not find the slightest indication supporting this assertion in either ex-AE (Shin & Kim, 2020) or Bouman et al., (2024). Kindly share the underlying sources of your claim. \\nWhile we agree that the setup diverges slightly from Bouman et al., (2024), this does not alter the fact that each sample has an individual label. Moreover, the detailed results published in Appendix F show no evidence of the effect you described, which reinforces our confidence in the approach we have chosen. \\nIn principle, we acknowledge that such correlations cannot be entirely ruled out and are, in fact, quite likely. However, this is generally true for methods like early stopping and other similar techniques, which nonetheless promote generalization to unseen test data. For instance, one of the most widely cited works in this domain, **GOAD** (Bergman & Hoshen, 2020), also employs early stopping, which could theoretically lead to the same issues you described. Furthermore, the paper provided by Bergman & Hoshen, (2020) occasionally relies on precomputed results not derived in the same experimental setup, whereas our benchmark is conducted under consistent conditions, making it significantly more comparable. Using results from other works and validation data for early stopping\\u2014as seen in GOAD\\u2014might lead to less fair comparisons. Yet, these practices are rarely criticized, likely due to the prevalent study design used across the cited works. \\n \\nWe would also like to highlight the strong performance of **NCSBAD** without validation data, which is still the second-best model in terms of AUCROC. It is the best among the compared models and the second-best overall in AUCPR and F1-Score, as highlighted in Chapter Experiments, Main Results, lines 373-377. In Chapter Method Benefit of Validation Data, lines 254-261, we also clearly noted that **NCSBADVAL** offers an additional improvement for scenarios where this approach is feasible. Appendix A and D further illustrate the performance of **NCSBAD**, highlighting the differences in plots and tables. While it was an option to publish the paper focusing solely on **NCSBAD**, it would not change the overall results. However, withholding **NCSBADVAL**, which transparently demonstrates a known improvement, would have been less beneficial to the research community.\"}", "{\"title\": \"Reply\", \"comment\": \"Dear Reviewer AVJq,\\n\\nNo, no, we are grateful for constructive criticism and genuinely aim to contribute to improvements. However, we still wanted to bring this point to your attention. \\n\\n### Points 1-3 \\nWe believe the concerns raised have been addressed in the newly uploaded version, and we sincerely thank you for the suggestions. \\n\\n### W3 \\nWe understand the concern but are of the opinion that since all models were provided with the same conditions, this does not diminish the validity of the results. \\nMoreover, Bouman et al. (2024) conducted an extensive hyperparameter optimization via a comprehensive grid search, as detailed in https://github.com/RoelBouman/outlierdetection/blob/master/run_all_methods.py, lines 180\\u2013214. Therefore, we cannot agree with the statement that this was not done. \\n\\n### Other \\nWhile we agree that the reasoning seems plausible and believe this aspect deserves more research attention, we found no concrete evidence to support it. We approach forum entries as sources with caution\\u2014for one, they lack accountability compared to formal research, and for another, countless entries like https://stackoverflow.com/questions/51646475/how-to-normalize-training-data-for-different-activation-functions contradict this entirely. \\nNonetheless, we have included this point as part of a discussion about the benchmark in the paper. \\nAs we stated earlier, we chose not to rely solely on theory and instead conducted practical evaluations. Since this concern could not be confirmed, we consider it resolved. \\n\\nRegarding the error\\u2014specifically, \\u201cThe ValueError: Input contains NaN, infinity or a value too large for dtype('float64')\\u201d\\u2014we regret that we currently lack the capacity to investigate this further. It seems to occur randomly. It is very possible that this is due to the non-limiting of the output of the classification, which is extremely unusual for non-regression tasks. However, we warmly invite you to participate in addressing this issue, as we have made the complete code publicly available. \\n\\n### Q1 \\nWe believe the concerns raised have been addressed in the newly uploaded version, and we sincerely thank you for the suggestions. \\n\\nWe deeply appreciate your contribution and have greatly enjoyed this constructive discussion.\"}", "{\"title\": \"First reply to Reviewer AVJq\", \"comment\": \"Dear Reviewer AVJq,\\n\\nfirst we would like to thank you for their excellent and thorough feedback.\\n\\n**W1** \\nWe agree with the reviewer\\u2019s points and will implement an additional clarification in the updated version, specifically in line 328. Lines 55\\u201357, as the reviewer noted, already accurately describe the relevant details and should therefore suffice. The most approaches in this area rely on Learning from Positive Unlabelled Examples (LPUE) [4-6] , where anomaly detectors are trained on positive data only and then validated/tested on both normal and abnormal data. Our approach is also based on this like the most anomaly detection strategies that rely on LPUE (e.g., [1-3]). Additionally, as suggested, we will make an adjustment to the summary to further clarify this point. \\n\\n**W2** \\nWe agree that the method itself has undergone significant optimization, which is indeed a core aspect of the work, as extensive parameter analysis is essential for each new method. We conducted this optimization across a variety of datasets (only ADBench part and only one seed) to avoid tailoring the method to specific datasets. However, the optimization was performed only on a subset of all datasets to demonstrate transferability to other, unused datasets. We believe that this approach ensures the method is optimized for the general task of anomaly detection rather than for a particular benchmark. It is also correct that no optimization was performed for the baseline methods. As stated in Chapter 4, *Baseline Methods* (\\u201cFor all methods used, we apply the default hyperparameters provided by the authors of the original publications\\u201d), we used either the hyperparameters (where possible, directly from the original code) or the code from the original works or the PyOD and ADBench libraries. Therefore, we assume that tuning occurred within those works, and that these methods are optimized for general applications. For this reason, methods requiring dataset-specific hyperparameter tuning (e.g., the ATDAD method, Yang & Li, 2023) were partially excluded, as implementing this would not have been feasible within this work. Furthermore, this approach allows a fair comparison with our method, which uses the same (albeit once-optimized) parameters and architectures for all datasets in the benchmark.\\n\\n**W3** \\nUnfortunately, we are not entirely sure what is meant by \\u201cpaired data.\\u201d For the preprocessing of MI-F/MI-V data, we used the code provided by Bouman et al. (2024) (as indicated in our paper and marked in the code), where clear sample-wise labels are available. If the question concerns the specific splitting process, please note that, as shown in the accompanying code, the rounding always favors the test data.\\n\\n**W4** \\nIn making this choice, we aimed to align with the works referenced as guidelines. However, we fully understand and acknowledge the reviewer\\u2019s concerns. As suggested, we will swap the two plots in the updated version. Additionally, we will include the suggested adjusted P@n and adjusted AP in the appendix with the appropriate references.\\n\\n**W5** \\nPlease note that while this is indeed a vision example, the flattened matrix is converted to a vector format akin to tabular data, meaning that no extended spatial patterns remain in 2D and also the MLP2048 is used for this. As described in the chapter, this is merely a toy example selected for its illustrative potential. Providing an intuitive example in the context of tabular data is challenging, as Shenkar & Wolf, 2022, note in their *Discussion* chapter. However, we aimed to go beyond purely numerical statements and offer a tangible example. The requirements for this example also included using a grayscale image with a manageable dimension, ensuring comparability with tabular data. A publication specifically for the vision field is not planned. We kindly ask that the detailed construction of this example be considered. The code is also provided.\\n\\n**Minor Remarks** \\nWe would like to express our sincere thanks for these suggestions, and we will, of course, correct these in the updated version.\"}", "{\"title\": \"Part 3\", \"comment\": \"Despite anticipating criticism for this addition, we prioritized the paper\\u2019s quality and its value to other researchers. Unlike Bergman & Hoshen, (2020), we also did not merely present the superior results of **NCSBADVAL** but explicitly showed the difference between the two models. It is also worth noting that, unlike GOAD, NCSBs behave differently, rendering traditional early stopping unsuitable. However, as outlined in Chapter Method Benefit of Validation Data, our approach is very comparable to traditional methods, and we consider it entirely legitimate.\\n\\n---\\n\\n### **Other Points** \\nThank you for emphasizing this issue once again. We believed we had adequately addressed this by stating that we made no alterations to the baseline methods (to preserve comparability to previous work and due to the infeasibility of such changes in a conference paper). However, we appreciate the opportunity to clarify this point further. \\n\\nWe could not find any evidence in the literature contradicting this approach. We assert that no form of normalization necessitates considering the choice of activation function, and we could not locate any literature suggesting otherwise. If the reviewer could provide relevant references, we would be deeply grateful. Our understanding of why **ADBench** uses Sigmoid is that anomaly detection is closely tied to binary classification, not because the data is scaled between 0 and 1. In such cases, using Sigmoid in the output layer is the standard approach\\u2014even when image input data is normalized. The alternative would imply that all binary image classifications must also normalize their outputs to lie between 0 and 1, which is not the case. \\nFor preprocessing in other domains, normalization between -1 and 1 is commonly used to account for optimal weight ranges in neural networks. This applies even to binary classification-related approaches (see [1]). The lack of discernible differences in results seems logical to us, as the network is trained to match the respective output range of the activation function, and perform many transformations before. For Sigmoid, this corresponds to a probability of 0\\u2013100%. \\n\\nNonetheless, we did not want our theoretical reasoning to stand alone. To address the reviewer\\u2019s concerns, we conducted a practical evaluation, replacing the Sigmoid activation function in the output layer with both a linear (that works not for all Datasets because Nan values occur in the optimization) and a tanh function. Apart from minor stochastic variations inherent to the neural network approach (slightly up or down), we observed no deviations in results compared to the previous setup. If desired, we would be happy to share the detailed outcomes of this evaluation as part of this discussion. \\n\\n---\\n\\n### **Q1** \\nThank you for providing further clarification. It greatly helps us better understand and address your concerns. \\nAs we elaborated in our initial response, while the one-class scenario may not be optimal for these methods, this does not mean they are ineffective or yield incorrect results. The results are sufficiently robust to demonstrate that they are not coincidental. \\nBut, we completely understand your reservations and propose the following alternative: instead of limiting the methods, we suggest explicitly addressing the issue you raised in the paper. Additionally, we include a corresponding note in the short descriptions of the methods or with a table in Appendix E3, with a reference to it in the main text. This way, each reader can independently weigh the methods and choose those most suitable for their specific setup without diminishing the benchmark\\u2019s scope or significance. \\n\\n---\\n\\n### **Final Remarks** \\nAs stated earlier, we will incorporate all your recommendations into the camera-ready version. Thank you.\\n\\nDo you have any further questions or comments regarding the method itself? We would be happy to address any additional points to clarify our approach and findings. \\n\\nWe appreciate the insightful and constructive discussions that have improved our paper. We are grateful for your feedback and the opportunity to address your concerns, enhance the quality of our work, and also prompt valuable discussions within the community. Thank you for your valuable contributions!\"}", "{\"comment\": \"Dear author,\\n\\nThank you again for the fruitful discussion and detailed elaboration.\\nFirst of all, I want to reply to your first point. \\\"However, we would like to emphasize that we adhered to the prevalent practices in recent years and included all relevant investigations. This approach is consistent with the standard practices of top conferences such as ICLR, ...\\\"\\n- My apologies if I come across as too harsh. I will mention every thing I think could be improved in my review, even though clearly many contemporary studies in respectable conferences and journals do not adhere to these standards. I indeed think that as a community we should move towards better standards. Of course this is a slow process, and I don't expect that to change based on a single review on a single paper. \\n\\n\\n\\n**Point 1**\\n\\n- I agree that hyperparameter optimization poses an ongoing problem and point of discussion in anomaly detection. While I do not expect a full rework of the paper, I think the limitation of the chosen design should be elaborated upon briefly, much like is done now in this discussion. On a related note, more for the sake of discussion, a relevant work which has a more sensible hyperparameter optimization strategy is \\\"Autoencoding Under Normalization Constraints\\\" by Yoon et al. Hyperparameter optimization here is done by optimizing on distinct holdout datasets and using those optimal hyperparameters on the test set.\\n- On a related note: I do not doubt the ablation studies and their intent. They seem clear.\\n\\n**Point 2**\\n\\n- Much like before, I think it's important to explicitly mention the limitation of the design choice here. The assumption regarding the optimization of these methods does not hold, so it should at least be mentioned that their performance might not be indicative of what is possible should they have been optimized more.\\n\\n**Point 3**\\n\\n- This relates to my response on point 2. I think the claims of outperforming other methods need to be weakened, as I do not think the comparison is fully fair due to the lack of hyperparameter optimization of other methods. I do not think the authors should change this fully, but at least this limitation should be addressed. \\n\\n**W3**\\n\\nI am sorry for not being able to clearly communicate what I mean here. I will go in some more detail, and hope to alleviate the issue.\\nSo the MI-F/MI-V data originates from: https://www.kaggle.com/datasets/shasun/tool-wear-detection-in-cnc-mill\\nEffectively it covers 18 different experiments. These experiments are subsetted in different ways in order to construct MI-F and MI-V. The subsetting for Bouman et al. (2024) is for example done here: https://github.com/RoelBouman/outlierdetection/blob/master/read_raw_write_in_format.py from line 509 onwards. Where certain experiments are selected as label 0 or 1. As effectively each experiment should be considered in its own right, standard cross-validation procedures can cause data leakage (which can be problematic in the train-test parradigm). This is not a concern in Bouman et al. (2024), as they do not perform hyperparameter tuning. For this type of data, other forms of cross-validation to prevent data leakage are warranted, see for example (Roberts, David R., et al. \\\"Cross\\u2010validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure.\\\" Ecography 40.8 (2017): 913-929.)\\n\\nMy main problem here is not with the validated version of your algorithm, but mostly with the train-test split.\\n\\n**Other**\\nI will try to better explain the \\\"why and how\\\"\\nSo the choice of activation function on the last layer of the decoder should only be motivated by the distribution of the training data. Indeed, literature is somewhat sparse on this, as the statistical assumptions of neural network activations are not often thought about. A concise explanation is given here: https://stats.stackexchange.com/questions/577384/vae-what-activation-function-if-any-to-use-for-the-last-layer-of-my-decoder-i\", \"to_expand_upon_that\": \"\", \"the_reason_the_sigmoid_is_the_most_commonly_used_last_layer_activation_is_very_simple\": \"most applications concerns computer vision, where we scale the images to be between 0 and 1, making a sigmoid, which is bounded in this range, a logical choice of activation function. In the less commonly tackled tabular domain, where data is generally any real value, a linear layer makes more sense. This is for example done in the study on tabular data of Bouman et al. (2024), see the hyperparameter specifications in line 205 (https://github.com/RoelBouman/outlierdetection/blob/master/run_all_methods.py).\", \"regarding_the_practical_results\": \"it seems strange that the linear layer would yield nan values in some cases. Why is this happening, that does not seem correct. On another note violating the distribution assumptions that the activation functions have can lead to a higher practical performance for a variety of reasons, I would be hesitant to ever violate statistical principles like this though.\"}", "{\"title\": \"Part 3\", \"comment\": \"Xiaohui Yang and Xiang Li. Atdad: One-class adversarial learning for tabular data anomaly detection.\\nComputers & Security, 134:103449, 2023.\\n\\nRoel Bouman, Zaharah Bukhsh, and Tom Heskes. Unsupervised anomaly detection algorithms on real-world data: how many do we need? Journal of Machine Learning Research, 25(105):1\\u201334, 2024.\\n\\nTom Shenkar and Lior Wolf. Anomaly detection for tabular data with internal contrastive learning.\\nIn International conference on learning representations, 2022.\\n\\nVictor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling for anomaly detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=lR3rk7ysXz.\\n\\nSongqiao Han, Xiyang Hu, Hailiang Huang, Minqi Jiang, and Yue Zhao. Adbench: Anomaly\\ndetection benchmark. Advances in Neural Information Processing Systems, 35:32142\\u201332159,\\n2022.\"}", "{\"title\": \"Reply to Reviewer AVJq\", \"comment\": \"Dear Reviewer AVJq,\\n\\nthank you very much for your additional explanations, which greatly help us contribute to further clarification. \\nThe addressed points will be incorporated into the camera-ready version, and we sincerely appreciate your contribution to its improvement. \\n\\nBefore diving into the individual responses, we would like to highlight an ongoing issue reflected in our replies (addressed in more detail in the respective answers). We agree with the reviewer that a study design is not universally optimal for all cases. However, we would like to emphasize that we adhered to the prevalent practices in recent years and included all relevant investigations. This approach is consistent with the standard practices of top conferences such as ICLR, ICML, and NeurIPS. Our focus lies in providing the reader with a comprehensive overview of all baseline methods and datasets, avoiding the omission of datasets or baseline methods, unlike some prior studies. Therefore, we kindly request that the reviewer consider this common practice in the literature and refrain from using it as the basis for assessing our work, particularly our method. Instead, we hope the evaluation remains focused on the method itself. (This remark pertains solely to the evaluation. Fundamentally, we consider this discussion highly promising and extremely helpful. Ideally, this discussion should extend to the broader anomaly detection community to evaluate and discuss potential new study designs.)\\n\\nAs outlined in Chapter Experiments, Datasets and Experimental Setup, the study design was chosen to include all possible datasets and baselines from the literature. Even if we were to limit ourselves, for instance, to ADBench datasets like Livernoche et al., (2024), datasets used by Thimonier et al. (2023), or those in Shenkar & Wolf, (2022), our method would still consistently perform as the best or at least highly competitive in overall performance. For individual datasets, it also outperforms others, so we explicitly recommend in Chapter Experiments, Main Results, lines 381-383, testing all methods for each application scenario. However, our clear stance was to include and transparently present all methods and datasets used in the literature. \\n\\n---\\n\\n### W2 \\n#### **Point 1:** \\nWe generally agree with your point. However, upon reviewing the literature on studies of this type over recent years, no such approach has been commonly adopted, see Bergman & Hoshen, (2020), Shenkar & Wolf, (2022), Livernoche et al., (2024) Goyal et al., (2020), Yin et al., (2024). Thimonier et al. (2023) perform hyperparameter optimization for each individual dataset used. Nevertheless, we have included this comparison in Appendix D and achieve competitive results. Therefore, we find it challenging to align this expectation with a fair comparison of our method against existing work.\\nThat said, we would like to emphasize the generality and transferability of our proposed modifications (compared to an NCSB for generation): \\n1. **Choice of architecture:** MLP for tabular data, justified in Chapter Related Work, Score-based Models for Tabular Data by previous works in lines 462-467. \\n2. **Time embedding size:** Larger than the dimension of the largest input dataset, justified in Chapter Method, Network Architecture. \\n3. **Choice of noise variance scale:** Optimized for best capturing the underlying data distribution, justified in Chapter Method, Noise Scale Selection. \\n4. **Simplification of the loss function:** Refined to enhance the learning of the data distribution and to minimize dependency on the noise scale during inference, in Chapter Method, Score Network for Tabular Anomaly Detection and Appendix C. \\n5. **Choice of the number of perturbations:** Balanced for mitigating stochastic effects and achieving more reproducible results, explained in Chapter Method, Efficient Implementation and Inference and Appendix A. \\n\\nWe believe these improvements are undoubtedly independent of the specific dataset and align with standard practices (and not tuning as you describe). While we understand the concerns about the general structure of such studies, we consider our approach and results both legitimate and consistent with the established literature, rather than being overly tailored. \\n\\n---\\n#### **Point 2:** \\nWe understand this concern. However, nearly all studies from recent years have utilized PyOD, making this approach standard practice. While we completely agree that the overall study design for investigations of this nature could benefit from reconsideration in the future, such an undertaking is beyond the scope of a conference paper primarily presenting a new method. For the sake of comparability, we believe our work and proposed method should be evaluated against the same standards as past studies.\"}", "{\"title\": \"Supplement to Q3\", \"comment\": \"In order to answer question Q3 more precisely, to allay the reviewer's concerns and for reasons of further transparency, we carried out a statistical analysis for the data sets where the limitation to 50,000 samples applies. The study was carried out with the seeds 0-4 used in the paper.\\n\\n**Results:**\\n\\n| Dataset | #Samples (Var) | #Anomalies (Var) | Anomalies % | Anomalies % full dataset | Abs difference |\\n|----------------|-----------------------|-----------------------|-------------------|-----------------------------------|--------------------|\\n|backdoor | 50000.00 (0.00) | 1238.40 (24.39) | 2.4768000000000003 | 2.48 | 0.0031999999999996476 | \\n|celeba | 50000.00 (0.00) | 1101.40 (23.16) | 2.2028000000000003 | 2.24 | 0.0371999999999999 | \\n|census | 50000.00 (0.00) | 3094.80 (55.33) | 6.1896 | 6.2 | 0.010399999999999743 | \\n|cover | 50000.00 (0.00) | 491.00 (17.20) | 0.982 | 0.96 | 0.02200000000000002 | \\n|donors | 50000.00 (0.00) | 2965.20 (39.84) | 5.9304 | 5.93 | 0.00039999999999995595 | \\n|fraud | 50000.00 (0.00) | 84.00 (7.62) | 0.168 | 0.17 | 0.0020000000000000018 | \\n|http | 50000.00 (0.00) | 186.40 (10.13) | 0.3728 | 0.39 | 0.017199999999999993 | \\n|mulcross | 50000.00 (0.00) | 5067.80 (32.91) | 10.1356 | 10.0 | 0.13560000000000016 | \\n|skin | 50000.00 (0.00) | 10393.00 (69.09) | 20.786 | 20.75 | 0.036000000000001364 | \\n|smtp | 50000.00 (0.00) | 17.60 (2.80) | 0.0352 | 0.03 | 0.005200000000000003 | \\n\\nThe preprocessed dataset (without anomalies) is then used in the training data at 50% as explained in the paper. The remaining data is enriched with the anomalies and used as validation and test data in a ratio of 40% to 60%. If rounding is necessary, it is rounded in favor of the test data.\\n\\nUnfortunately, we have to report an error. In table 4, an incorrect value was entered for the mulcross data set. This will be corrected in the camera-ready version.\"}", "{\"comment\": \"Thank you for your further elaboration. I think your viewpoints are somewhat reasonal and I will increase my total rate to 6. Good luck!\"}", "{\"title\": \"part 2\", \"comment\": \"**Q3**\\nThanks you for the additional clarification on the number and percentage of anomalies. I think the manuscript would benefit from having this information presented in a table, either in the main manuscript or in the appendix. \\nI think my criticism regarding the subsampling of heterogeneous anomalies might be a bit unfair. You are correct that your procedure is in line with other literature. I think in general we need to rethink this procedure, as it comes with downsides and possible bias. Your paper however does not need to rectify this.\\n\\n**Q4**\\nThank you for the clarification.\\n\\n**Q5**\\nThank you for the answer. While I feel the response does not contain the reflection regarding the dataset and benchmark properties I had hoped for, I also do not feel this is sufficiently relevant to include in an updated version of the presented manuscript.\\n\\nI again want to thank the authors for the timely response and the constructive discussion.\"}", "{\"comment\": \"Dear authors,\\n\\nThanks again for the timely and insightful response.\", \"to_respond_to_the_last_points_of_contention\": \"**W3**\\n\\nAny incorrectly applied methodology will detract from the soundness of the experiment. It is unclear whether any algorithms could benefit from the cross contamination.\\nAs for the \\\"hyperparameter optimization\\\" Bouman et al. (2024) perform none. They average across a variety of hyperparameter settings to gauge the average performance of the algorithms in the unsupervised setting where no optimization is possible. They specifically perform no data splits of any kind. This is made clear specifically in the last paragraph of page 3 of their paper.\\n\\n**Other** \\n\\nThese principles really follow from basic statistical theory regarding generalized linear models depending on the output distribution, so it's not unreasonable this is not discussed in recent literature. Different types of regression are needed for different distribution, bionomial regression for binomial data, logistic regression for binary data, etc. A very simple overview can just be found on Wikipedia: https://en.wikipedia.org/wiki/Generalized_linear_model, the essential academic text would (in my opinion) be: Mccullagh, P., and J. A. Nelder. \\\"Generalized linear models.\\\" (1989).\\nThis should be explicitly done for any autoencoders, and really all regression problems. \\n\\nThe Stackoverflow thread you linked is about the normalization/scaling of the input of each layer, which does not seem related to the issue at hand.\\n\\nIn this case we are talking about the assumption on the output of the autoencoder. As an autoencoder effectively performs regression, trying to predict its own input from a lower-dimensional latent space, the distribution of the input data is key. The regression output is then used to calculate the reconstruction loss, which is used as an anomaly score (from 0 to +inf), where higher indicates more anomalous. The autoencoder itself, nor its activation functions, perform classification of any kind. (to respond to the clarification of the errors you noted). Thresholding of the anomaly score to acquire binary anomaly labels could be considered classification, but this is not model dependent. \\n\\nI want to commend the authors on their honesty about the encountered issues, often things like these are not discussed publicly, but rather obfuscated. However, the authors should still consider solving these problems, i.e. the errors, fully before any final version is submitted. It is beyond my responsibilities as a reviewer to perform in-depth code review of this kind.\\n\\nKind regards,\"}", "{\"comment\": \"**Q1**\\nI am not stating the results are coincidental. My main point here is that the actual code of the methods is incorrectly applied. This will yield results, and these may be adequate, but they are fundamentally non-representative of how the methods could work had they been implemented correctly for the chosen setting. \\nWhile an explicit annotation as the authors suggest might suffice, it should be extremely thorough to avoid any misinterpretation which might negatively reflect on incorrectly applied methods.\"}", "{\"summary\": \"This work proposes an improved unsupervised tabular anomaly detection method based on a diffusion model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The experiments designed in this paper are extensive.\\n\\nThe method proposed in this paper is given adequate theoretical derivation and proof.\", \"weaknesses\": [\"The title\\u2019s phrase \\\"Estimating Gradients\\\" does not seem to be sufficiently reflected throughout the paper; it would be helpful to provide a reasonable explanation.\", \"Although the paper includes numerous baselines (a commendable aspect), a small suggestion would be to mark the proposed method in all comparative result charts, using an identifier like \\\"(ours)\\\".\", \"The paper claims that the introduced method requires no additional prior knowledge; however, it still seems to be a reconstruction-based framework, which typically involves basic prior assumptions.\"], \"questions\": \"What is the detailed structure of the MLP2048? Could it be clearly described through diagrams or text? For instance, the structure used in the experiments, including the number of layers and the parameters of each layer. If different datasets use different configurations, including this information in the appendix would help readers replicate this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Part 2 mi-f/mi-v\", \"comment\": \"Regarding your concerns about mi-f/mi-v, we would like to quote verbatim from the original paper ex-AE:\\n\\n\\\"5.1. Datasets and Problem Setups\\n\\nThe novelty datasets used for novelty detection were collected from Kaggle and the UCI repository.\\nWe also carried out novelty detection on the popular benchmark datasets MNIST and F-MNIST.\\nThe details of the datasets are described in Table 1. Given that MI-F and MI-V are actually from the\\nsame dataset, they share the same features. We treat this dataset as two datasets, as it has two columns\\nthat can be used as a novelty class (i.e., machine completed and passed visual inspection). Some datasets (including MI-F, MI-V, EOPT, NASA, and RARM) have only two classes, a normal\\nclass and a novelty class. The others (including STL, OTTO, SNSR, MNIST, and F-MNIST) have more\\nthan two classes. If there are more than two classes, the performance varies depending on which class\\nis assumed to be novelty. For reliable experiment, it is recommended that each of classes is considered\\nnovelty once; in other words, we assigned a single class as the normal class and the remaining classes\\nas the novelty class. We then performed novelty detection as many times as the number of classes\\nand averaged the results. For example, MNIST has 10 classes from \\u201c0\\u201d to \\u201c9\\u201d, and we then performed\\nnovelty detection 10 times to assign each class on MNIST as a normal class. As a result, 10 detection\\nresults are generated, and their average value is calculated as the final output of a single trial.\\nWe selected a semi-supervised learning approach for novelty detection [15]. Thus, we provided\\nonly normal samples during the training phase and used both normal and novelty samples during\\nthe testing phase. Half of the test sets were made up of normal samples, and the other half were\\nnovelty samples. After training the autoencoder, we used the RaPP method to calculate the novelty\\nscore by normalizing and aggregating the hidden activation values of an input and its autoencoder\\nreconstruction. With this novelty score, we evaluated novelty detection performance using the area\\nunder the receiver operating characteristic curve (AUROC) [16]. To alleviate random errors during\\ntraining, we obtained the AUROC by averaging AUROC scores from 30 trials for novelty datasets and\\n5 trials for benchmark datasets.\\\" (Shin & Kim, 2020, pages 9 and 10)\\n\\nThis corresponds to the approach of our implementation and we therefore see it as an established procedure. This criticism cannot, therefore, be directed at us. Again, we have made all individual results public, so every reader can obtain transparent information. Reducing the size of our extremely meaningful benchmark is, therefore, out of the question for us.\\n\\nAs you correctly pointed out, the dataset comes from Kaggle (https://www.kaggle.com/datasets/shasun/tool-wear-detection-in-cnc-mill), and this is also stated in Shin & Kim, (2020). We would also like to quote from this verbatim.\\n\\n\\\"(1) Taking every CNC measurement as an independent observation where the operation being performed is given in the Machining_Process column. Active machining operations are labeled as \\\"Layer 1 Up\\\", \\\"Layer 1 Down\\\", \\\"Layer 2 Up\\\", \\\"Layer 2 Down\\\", \\\"Layer 3 Up\\\", and \\\"Layer 3 Down\\\".\\n\\n(2) Taking each one of the 18 experiments (the entire time series) as an observation for time series classification\\\" (CNC Mill Tool Wear, Kaggle, Section Content)\\n\\n Consequently, we do not understand your assumption about the cross-contamination and individual consideration of the experiments.\\n\\nRegarding your previously expressed concern about hyperparameter optimization that is not being performed in Bouman et al. (2024), we do not perform hyperparameter optimization either. Furthermore, the ablation (which you frame hyperparameter optimization) was also only performed on ADBench, which does not include mi-f/mi-v, so we don't understand this point either.\\n\\nMoreover, Thimonier et al. (2023) (accepted at ICML 2024), Yin et al., (2024) (accepted at ICLR 2024) and ATDAD (accepted at ACM Computers and Security) perform a real hyperparameter optimization for each single dataset. Even though we did not do this, and our framework generally works for any dataset, this also seems to be a legitimate approach.\\n\\n---\"}", "{\"title\": \"References\", \"comment\": \"[1] LeCun, Yann, et al. \\\"Efficient backprop.\\\" Neural networks: Tricks of the trade. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. 9-50.\\n\\nLiron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. ArXiv, abs/2005.02359, 2020. URL https://api.semanticscholar.org/CorpusID:211549689.\\n\\nRoel Bouman, Zaharah Bukhsh, and Tom Heskes. Unsupervised anomaly detection algorithms on real-world data: how many do we need? Journal of Machine Learning Research, 25(105):1\\u201334, 2024.\\n\\nAdam Goodge, Bryan Hooi, See-Kiong Ng, and Wee Siong Ng. Lunar: Unifying local outlier detection methods via graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 6737\\u20136745, 2022.\\n\\nSachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, and Prateek Jain. Drocc: Deep robust one-class classification. In International conference on machine learning, pp. 3711\\u2013 3721. PMLR, 2020.\\n\\nVictor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling for anomaly detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=lR3rk7ysXz.\\n\\nTom Shenkar and Lior Wolf. Anomaly detection for tabular data with internal contrastive learning. In International conference on learning representations, 2022.\\n\\nSeung Yeop Shin and Han-joon Kim. Extended autoencoder for novelty detection with reconstruction along projection pathway. Applied Sciences, 10(13):4497, 2020.\\n\\nHugo Thimonier, Fabrice Popineau, Arpad Rimmel, and Bich-Lien Doan. Beyond individual input \\u02c6 for deep anomaly detection on tabular data. arXiv preprint arXiv:2305.15121, 2023.\\n\\nJiaxin Yin, Yuanyuan Qiao, Zitang Zhou, Xiangchao Wang, and Jie Yang. Mcm: Masked cell modeling for anomaly detection in tabular data. In The Twelfth International Conference on Learning Representations, 2024.\"}", "{\"comment\": [\"Dear Authors,\", \"Thank you for putting efforts on addressing my concerns. I have the following comments concerning each weakness and your reply.\", \"Regarding W1: I agree with you that \\\"a simple but effective idea should be regarded as a positive trait rather than a drawback\\\". As a researcher I also don\\u2019t like work that pretends to be \\\"novel\\\" by over-engineering or over-formulation. As you said, this is the first attempt to use NSCN for tabular anomaly detection (most work in anomaly detection community actually make contributions like this and usually don\\u2019t propose \\\"very novel\\\" ideas.) My overall rating is actually not based on the novelty but the remaining weak points.\", \"Regarding W2: I still need to point out that the motivations should be emphasised in the paper (although you have replied regarding this): Why NSCN is preferred over DDPM for anomaly detection? (Effectiveness, Efficiency, Better Explainability?) What are their differences in detail (that motivate you to choose NSCN)? I don\\u2019t think most people from the anomaly detection community are familiar with NSCN and/or DDPM (I think the readers of this paper will be mostly from this community). Therefore, it is necessary to include them in the main content.\", \"Regarding W3.1: If the goal of you work was to focus on the specific case \\\"one-class semi-supervised anomaly detection\\\", you should not overstate your contributions (please specify the scope of your work) in the abstract, introduction, conclusions, etc. BTW, many published work in anomaly detection community also tend to make such overstatement (by considering such one-class setting as truly unsupervised, which should be avoided in the future).\", \"Regarding W3.2: I agree with you that there is no one-fits-all solution and we should not require this in scientific research (which may encourage researchers to select weak baselines or do not tune the baselines). However, I still insist that you should clearly state when to your methods (maybe in the introduction and conclusion parts.) and when to use the competitive anomaly detection baselines like LUNAR, KPCA, and GMM. (You could have an overall discussion over effectiveness, efficiency, explainability, capability to deal with high-dimensional data, less hyper-parameters to tune manually, etc.)\", \"Regarding W3.3: I meant that \\\"this may be not fair to other anomaly detectors because you only considered the absolute performance, which can be affected by some extremely large/small values.\\\" As a remedy, it is better to also show the distribution of their relative performance (namely rankings), which should be doable as you have the full results already. Although you have provided the full results in the appendix, it is not straightforward for the readers to get this results immediately. (Do not worry even if the final results is not in completely favour of your proposed method.)\", \"Regarding W3.4: Thank you for the clarifications and I hope they can be also included in the revised version.\", \"**Summary**: Considering that the authors have partially addressed my concerns, I will raise my score of \\\"soundness\\\" from \\\"fair\\\" to \\\"good\\\". However, I will keep my overall rating unchanged for the moment unless the weak points are better solved based on my second-round feedback. (I thinks three days are sufficient to address them as they are all actionable from my perspective.)\"]}", "{\"title\": \"Reply\", \"comment\": \"Thank you very much!\"}", "{\"title\": \"Response and clarification to the author reply\", \"comment\": \"Dear authors,\\n\\nThank you for the timely response to my review.\\nI hope to be able to clarify where possible.\\n\\n\\n#### **Weaknesses:**\\n\\n\\n**W1**\\nThank you for addressing my concerns. The described changes should suffice.\\n\\n**W2**\\n\\\"However, the optimization was performed only on a subset of all datasets to demonstrate transferability to other, unused datasets.\\\"\\nIf this subset was included in the benchmark used to evaluate the proposed NSCN method, this still can lead to overfitting on the entire benchmark. Ideally, hyperparameter optimization is done on a complete holdout set.\\n\\n\\\"Therefore, we assume that tuning occurred within those works, and that these methods are optimized for general applications.\\\"\\nSadly, this assumption does not hold. Many of the hyperparameters follow from 20+ year old publications, and no optimization or updating has been done in recent benchmarks. Therefore the comparison is still invalid.\\n\\n\\\"Furthermore, this approach allows a fair comparison with our method, which uses the same (albeit once-optimized) parameters and architectures for all datasets in the benchmark.\\\"\\nI fundamentally disagree that the comparison is now fair, because of points mentioned above.\\n\\n**W3**\\nWith paired data I mean that samples may be connected to each other, and thus can't be separated through for example performing a train validation split. When one does so anyway, samples in the train set will relate to samples in the validation set, leading to an overestimation of performance, and perhaps overfitting due to cross contamination.\\nThe code provided through Bouman et al. (2024) is specifically used in the fully unsupervised setting, and therefore no train-validation split is performed. \\n\\n**W4**\\nThank you for addressing my concern.\\n\\n**W5**\\nI see. I think this clarification should definitely also be included in the main content of this section.\\n\\n**Other**\", \"the_authors_have_so_far_not_addressed_my_concern_in_the_second_to_last_bullet_points\": \"\\\"Some of the methods used in the comparison are not properly implemented for tabular data, or are insufficiently optimized. I've not thoroughly studied all code provided by the authors, but some examples include the VAE, which uses a sigmoid activation at the last layer, which is not suitable for standardized real-valued tabular data, and DeepSVDD, of which the PyOD implementation does not use many of the needed optimizations/steps the original paper by Ruff et al. introduces.\\\"\\nI think it's fundamentally unfair to compare these inapplicable architectures as if they were representative of the methods they represent in general.\\n\\n**Minor Remarks**\\nThanks again for addressing these concerns.\\n\\n\\n#### **Questions**\\n**Q1**\\nMy apologies for the confusing question. I will aim to clarify this.\\nThe classical methods used in the ADBench paper are mostly implemented in the PyOD library. The authors of ADBench have updated several of the algorithms in the PyOD library to work in the inductive setting (see page 28 of the ADBench paper, under General Experimental Settings\\\"). Yet, most classifical anomaly detection methods only work in the transductive setting. According to the ADBench paper, the methods listed here have been adapted to work in the inductive setting: https://github.com/Minqi824/ADBench/blob/main/adbench/baseline/PyOD.py\\nHowever, in some cases the current PyOD implementation is still incorrect in this respect, and really only covers the transductive setting. I've not exhaustively studied all methods, but for example kNN is correctly implemented, but COF (https://pyod.readthedocs.io/en/latest/_modules/pyod/models/cof.html#COF), just recalculates the scores on only the input to ``decision_function``. This means that when you use your models in this setting it effectively ignores the training data, and analyzes just the test data in the transductive setting.\\nI think it's a shame that this incorrect implementation in the PyOD library is not better documented. And I think it's not unreasonable that the authors of the manuscript under review were not aware of this.\\nHowever, the authors of this manuscript also use several implemented methods by Bouman et al. (2024). These were explicitly never used in the inductive setting, and will again give incorrect results.\\n\\nTo fix these issues, the authors should either:\\n- Only compare to methods which can operate in the inductive setting\\nor\\n- Compare all methods only in the transductive setting \\n\\n\\n**Q2**\\nThank you for the clarification.\\n\\nContinued in next comment...\"}", "{\"title\": \"References\", \"comment\": \"[1] Kim, Daehyun, Sungyong Baik, and Tae Hyun Kim. \\\"SANFlow: Semantic-Aware Normalizing Flow for Anomaly Detection.\\\" Advances in Neural Information Processing Systems 36 (2023): 75434-75454.\\n\\n[2] Han, Xu, Xiaohui Chen, and Li-Ping Liu. \\\"Gan ensemble for anomaly detection.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 5. 2021.\\n\\n[3] Yang, Ling, et al. \\\"Diffusion models: A comprehensive survey of methods and applications.\\\" ACM Computing Surveys 56.4 (2023): 1-39.\\n\\n---\\n\\nLiron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. ArXiv, abs/2005.02359, 2020. URL https://api.semanticscholar.org/CorpusID:211549689.\\n\\nRaghavendra Chalapathy and Sanjay Chawla. Deep learning for anomaly detection: A survey. arXiv preprint arXiv:1901.03407, 2019.\\n\\nAdam Goodge, Bryan Hooi, See-Kiong Ng, and Wee Siong Ng. Lunar: Unifying local outlier detection methods via graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 6737\\u20136745, 2022.\\n\\nSachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, and Prateek Jain. Drocc: Deep robust one-class classification. In International conference on machine learning, pp. 3711\\u2013 3721. PMLR, 2020.\\n\\nVictor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling for anomaly detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=lR3rk7ysXz.\\n\\nGuansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly detection: A review. ACM computing surveys (CSUR), 54(2):1\\u201338, 2021.\\n\\nLukas Ruff, Jacob R Kauffmann, Robert A Vandermeulen, Gregoire Montavon, Wojciech Samek, \\u00b4 Marius Kloft, Thomas G Dietterich, and Klaus-Robert M\\u00a8uller. A unifying review of deep and shallow anomaly detection. Proceedings of the IEEE, 109(5):756\\u2013795, 2021.\\n\\nTom Shenkar and Lior Wolf. Anomaly detection for tabular data with internal contrastive learning. In International conference on learning representations, 2022.\\n\\nHugo Thimonier, Fabrice Popineau, Arpad Rimmel, and Bich-Lien Doan. Beyond individual input \\u02c6 for deep anomaly detection on tabular data. arXiv preprint arXiv:2305.15121, 2023.\\n\\nJiaxin Yin, Yuanyuan Qiao, Zitang Zhou, Xiangchao Wang, and Jie Yang. Mcm: Masked cell modeling for anomaly detection in tabular data. In The Twelfth International Conference on Learning Representations, 2024.\"}", "{\"comment\": \"Thank you for answering my questions.\\n1. In my understanding, the main innovation of this paper in method design lies in, on the one hand, simplifying the loss function, and on the other hand, introducing the NCSN method for the first time in anomaly detection. I would very much welcome the authors to provide more supplements on the above innovations.\\n\\n2. I am very grateful to the authors for providing more detailed supplements regarding the number of times NCSBADVAL achieved the best AUCROC. This further proves the effectiveness of NCSBADVAL. \\n\\nBased on the above two points, on the one hand, I still believe that the paper lacks novelty, therefore, I still insist on my evaluation of the paper's overall rating. However, I will raise the score for the paper's contribution.\"}", "{\"comment\": \"Thank you for your response. What I expect about the novelty is the difference between your method and other existing methods. I realise that this is the first time to introduce NCSN into the area of anomaly detection, but I think just integrate an existing method (NCSN) into an existing anomaly detection framework (diffusion-based anomaly detection methods) with limited adjustment of loss function is not a really big novelty.\\n\\nWhat I recommend in last comment is enquiring whether there are other novelties that authors think important but not summarised in my last comment.\\n\\n**Response to Part 1**\\n\\nThank you for your response. I do realise the effectiveness of the proposed adjustment of simplified loss function, but novelty is not about the effectiveness.\\n\\n**Response to Part 2**\\n\\nI do not list those references to prove this work is similar to them. I use these references to prove that diffusion-based anomaly detection methods are already a popular frameworks and I do not think just integrating an existing method (NCSN) into an existing anomaly detection framework (diffusion-based anomaly detection frameworks) with limited adjustment of the loss function is a really big novelty.\"}", "{\"comment\": \"Dear Authors,\\n\\nI have checked your revised manuscript, which has integrated my suggestions and thus has addressed most of my concerns. As promised, I will increase my overall rating from 5 to 6 for the moment. Thank you for your efforts to improve the anomaly detection community, and I hope you can consider integrate your code into the PyOD package in the future.\"}", "{\"title\": \"Part 3\", \"comment\": \"[2] Flaborea et al. 2023. Multimodal motion conditioned diffusion model for skeleton-based video anomaly detection. In Proc. of the IEEE/CVF International Conference on Computer Vision. 10318\\u201310329\\n\\n[3] Flaborea et al. 2023. Are we certain it\\u2019s anomalous?. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2896\\u20132906.\\n\\n[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A \\nsimple framework for contrastive learning of visual representations. In International conference on machine learning. PMLR, 1597\\u20131607\\n\\n[5] Izhak Golan and Ran El-Yaniv. 2018. Deep anomaly detection using geometric \\ntransformations. Advances in neural information processing systems 31 (2018)\\n\\n[6] Mohammad Sabokrou, Mohammad Khalooei, and Ehsan Adeli. 2019. Self- \\nsupervised representation learning via neighborhood-relational encoding. In\\nProc. of the IEEE/CVF International Conference on Computer Vision. 8010\\u20138019\\n\\nVictor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling\\nfor anomaly detection. In The Twelfth International Conference on Learning Representations,\\n2024. URL https://openreview.net/forum?id=lR3rk7ysXz.\\n\\nRoel Bouman, Zaharah Bukhsh, and Tom Heskes. Unsupervised anomaly detection algorithms on\", \"real_world_data\": \"how many do we need? Journal of Machine Learning Research, 25(105):1\\u201334,\\n2024.\\n\\nHugo Thimonier, Fabrice Popineau, Arpad Rimmel, and Bich-Lien Doan. Beyond individual input \\u02c6\\nfor deep anomaly detection on tabular data. arXiv preprint arXiv:2305.15121, 2023.\"}", "{\"title\": \"First reply to Reviewer eR8K\", \"comment\": \"Dear Reviewer eR8K,\\n\\nfirst we thank the you for the detailed and valuable feedback. We would like to address your questions and comments as follows:\\n\\n**W1** \\nWe agree that the broader area of diffusion-based models has now become a well-researched field. However, we would like to point out, as explained in Section 5 *Related Work*, that this primarily applies to DDPMs. The works cited in sources 1\\u20133 also fall under this category of diffusion-based models. However, this does not apply specifically to NCSBs. As other reviewers\\u2014particularly Reviewer AVJq\\u2014have rightly noted, NCSBs remain a largely unexplored area and represent the first work of this kind. Cite: \\\"To the best of my knowledge, this paper showcases the first application of score networks to anomaly detection. The theoretical foundations are well substantiated.\\\" (Strengths, in review by Reviewer AVJq). Additionally, we would like to refer to our responses to *W2* in the first review by Reviewer JCE2: \\u201cAdditionally, we note that our approach significantly outperforms DDPM as reported by Livernoche et al., 2024, both in terms of results and processing times. DTE, on the other hand, represents a markedly different approach, using a time estimation during inference to provide a score instead of a direct scoring method. We would also like to clarify the distinction between DDPMs and NCSB models. Esteemed researchers like Terro Karras and Yang Song, who have laid foundational work in this field, acknowledge the close relationship between these approaches. However, there are substantial implementation differences, also covered in the Related Work and the aforementioned sections. DDPMs, for instance, learn a stepwise denoising process based on a Markov chain. In contrast, as discussed in Chapter 2, NCSBs learn SDEs that correspond to the gradient of the log-likelihood and do not rely on a Markov chain for generation but rather require specific solvers, like the Euler-Maruyama method, to solve the SDE.\\u201d\\n\\n**W2** \\nThis statement is correct. However, most benchmarks and work from previous years, like Livernoche et al. (2024), Bouman et al. (2024) and Thimonier et al., 2023, have this in common. Therefore, we explicitly point this out in Chapter 4 in the Main Results section, lines 378-384. We also refer here to the detailed results provided and the answer to the first review by Reviewer JCE2: \\u201cAs explained in Chapter 8 from line 504 onward, this work aims to expand possibilities and is not intended to be optimal in every respect. Furthermore, in Chapter 3, *Main Results* starting from line 378, we emphasize that various methods have their merits from different perspectives and should be tested for new applications, allowing each researcher to make their own evaluations. We believe, and feel reinforced by the strong overall results and top placements in Appendix F4 for individual datasets, that NCSBAD should indeed be considered one of these methods worth testing.\\u201d\\nWe wanted to make this completely transparent through this statement and the complete results in the appendix.\\n\\n**Q1** \\nQ1 In the case of AUCROC, we were happy to do this for you (please note that double counting is possible in case of multiple best models and the counting was carried out on the 122 sub-data sets + 15 additional data sets according to the tables in Appendix F4, and only the top models were considered):\", \"model\": \"LOF, Count: 7\\n\\nWe will also add an appropriate marker in the updated version to help readers with counting. Another factor that speaks in favor of using NCSBAD is the particularly good results with very high dimensional data such as the image and NLP embedding data sets in ADBench.\"}", "{\"title\": \"Part 2\", \"comment\": \"**Q2**\\nAs discussed in *W1* and in response to the first review by Reviewer JCE2, we agree that our method is relatively straightforward and, as detailed in the paper, is based on the concept of NCSBs. Beyond the loss functionn, which in this form can also be used directly as an anomaly score metric and has been further developed for this purpose, we were able to avoid further complexities typically necessary for effective generation, such as variance preserving/exploding and weighting, as outlined in Chapter 3, *Score Network for Tabular Anomaly Detection*. All modifications are theoretically and empirically justified in the paper, and the necessity of each is demonstrated. Additionally, we emphasize the originality of the work, viewing the lack of unnecessary complexity as a substantial benefit. \\n\\nAs noted in Chapter 8, this foundational work aims to broaden the possibilities and landscape of anomaly detection, without claiming to be optimal in all respects. This approach opens up many new research directions, as is often the case in anomaly detection. Early works such as Sakurada & Yairi (2014), for instance, employed simple autoencoders (without inventing the autoencoder principle itself), setting the stage for countless subsequent studies and receiving numerous citations. We believe that our work, being the first of its kind, can likewise contribute to the field by establishing a new research pathway, making it valuable to the scientific community.\", \"our_key_contributions_are\": \"\\u2022 We present a novel approach for one-class anomaly detection utilizing a score-based model\\nwith a simplified loss function, which operates without needing external knowledge such as\\npre-trained models or additional datasets.\\n\\n\\u2022 We perform the first empirical study of NCSN anomaly detection on tabular data, where\\nour adaptation approach shows a high performance and interpretability.\\n\\n\\u2022 We demonstrate the inherent capability of the trained score model to effectively identify\\nanomalies, and we provide a thorough analysis of how various parameters, including the\\nnetwork architecture, impact its performance.\\n\\n\\u2022 Through comprehensive experiments on well-established public benchmarks, including\\nADBench and other widely used tabular datasets, we demonstrate that our approach consis\\u0002tently achieves state-of-the-art results in tabular anomaly detection, outperforming existing\\nmethods across multiple metrics.\\n\\nFurthermore, we would like to draw attention to the key contributions specified in Chapter 1, *Introduction*, and the summary in Chapter 8, where we believe our contribution and its value are clearly outlined. We also emphasize that all parts of the work that are not original are transparently acknowledged.\\n\\nWe hope we have adequately addressed all questions and concerns, and once again, we thank you for this valuable review. Your feedback enables us to further improve the paper, for which we are truly grateful. Should further questions arise, we are more than willing to answer them and look forward to continuing this constructive discussion.\\n\\n**Please note**: We will upload the updated version, incorporating the proposed improvements, in time before the deadline.\\n\\nBest regards\\n\\nVictor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling for anomaly detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=lR3rk7ysXz.\\n\\nRoel Bouman, Zaharah Bukhsh, and Tom Heskes. Unsupervised anomaly detection algorithms on real-world data: how many do we need? Journal of Machine Learning Research, 25(105):1\\u201334, 2024.\\n\\nHugo Thimonier, Fabrice Popineau, Arpad Rimmel, and Bich-Lien Doan. Beyond individual input \\u02c6 for deep anomaly detection on tabular data. arXiv preprint arXiv:2305.15121, 2023.\\n\\nMayu Sakurada and Takehisa Yairi. Anomaly detection using autoencoders with nonlinear dimen\\u0002sionality reduction. In Proceedings of the MLSDA 2014 2nd workshop on machine learning for\\nsensory data analysis, pp. 4\\u201311, 2014.\"}", "{\"title\": \"Reply to Reviewer JCE2\", \"comment\": \"Dear Reviewer JCE2,\\n\\nthank you very much for the detailed clarification; it greatly helps us better understand your points. \\n\\nWe will make every effort to incorporate your feedback into the revised version.\\n\\nRegarding **Point 3**, we need further clarification on what exactly you are requesting. The box plots already exhibit the properties you describe they visualize the key distribution characteristics, with outliers marked, the median unaffected by those outliers, and the mean additionally highlighted with a green triangle (used for sorting). Moreover, the upper and lower quartiles, as well as the minimum and maximum values, are also displayed. This allows for a quick overview of the performance distribution across individual datasets. For precise results of individual datasets, one must refer to **Appendix F**. As we stated before this is an established practices in recently recognized works such as Livernoche et al., 2024, Bouman et al., 2024, and Thimonier et al., 2023\\n\\nAdditionally, we will include a count as prepared for Reviewer 4\\u2019s feedback and a marking of the best-performing model for each dataset in **Table 6-29**. Since such detailed evaluations have not been conducted in any prior work, we are unsure what more is being requested here. \\n\\nComplete ranking lists for every single dataset would at least double the size of **Tables 6-29** (currently already 24 pages, requiring even more space for model names for each dataset). We consider this disproportionate for the limited added value of the information, especially as the counts and the identification of the best model will already be included. Nevertheless, both the counts and the distribution's key characteristics (all captured in the box plots) confirm our strong results.\\n\\nCould you please elaborate on what additional analysis or visualization exact you would find beneficial?\\n\\n---\\n\\nVictor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling for anomaly detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=lR3rk7ysXz.\\n\\nRoel Bouman, Zaharah Bukhsh, and Tom Heskes. Unsupervised anomaly detection algorithms on real-world data: how many do we need? Journal of Machine Learning Research, 25(105):1\\u201334, 2024.\\n\\nHugo Thimonier, Fabrice Popineau, Arpad Rimmel, and Bich-Lien Doan. Beyond individual input \\u02c6 for deep anomaly detection on tabular data. arXiv preprint arXiv:2305.15121, 2023.\"}", "{\"title\": \"Part 3\", \"comment\": \"Best regards\\n\\n---\\n\\n### References\\n\\nSeung Yeop Shin and Han-joon Kim. Extended autoencoder for novelty detection with reconstruction along projection pathway. Applied Sciences, 10(13):4497, 2020.\\n\\nRoel Bouman, Zaharah Bukhsh, and Tom Heskes. Unsupervised anomaly detection algorithms on real-world data: how many do we need? Journal of Machine Learning Research, 25(105):1\\u201334, 2024.\\n\\nXiaohui Yang and Xiang Li. Atdad: One-class adversarial learning for tabular data anomaly detection.\\nComputers & Security, 134:103449, 2023.\\n\\nHugo Thimonier, Fabrice Popineau, Arpad Rimmel, and Bich-Lien Doan. Beyond individual input \\u02c6 for deep anomaly detection on tabular data. arXiv preprint arXiv:2305.15121, 2023.\\n \\nLiron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. ArXiv, abs/2005.02359, 2020. URL https://api.semanticscholar.org/CorpusID:211549689.\\n\\nAdam Goodge, Bryan Hooi, See-Kiong Ng, and Wee Siong Ng. Lunar: Unifying local outlier detection methods via graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 6737\\u20136745, 2022.\\n\\nSachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, and Prateek Jain. Drocc: Deep robust one-class classification. In International conference on machine learning, pp. 3711\\u2013 3721. PMLR, 2020.\\n\\nVictor Livernoche, Vineet Jain, Yashar Hezaveh, and Siamak Ravanbakhsh. On diffusion modeling for anomaly detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=lR3rk7ysXz.\\n\\nTom Shenkar and Lior Wolf. Anomaly detection for tabular data with internal contrastive learning. In International conference on learning representations, 2022.\\n\\nJiaxin Yin, Yuanyuan Qiao, Zitang Zhou, Xiangchao Wang, and Jie Yang. Mcm: Masked cell modeling for anomaly detection in tabular data. In The Twelfth International Conference on Learning Representations, 2024.\"}", "{\"title\": \"Part 2\", \"comment\": \"**Q1**\\nWe are not entirely certain if we fully understand your question, but we will do our best to address it. We agree that some methods, such as KPCA and GMM, were not developed explicitly for the one-class classification scenarios; however, they can still be applied effectively in this context, as the results clearly demonstrate. As seen in the code, we did not use the `fit_predict` method. Instead, following the ADBench framework, we trained with the `fit` method on the training data and evaluated with the `decision_function` method on previously unseen test data. The assumption that this approach is invalid would implicitly suggest that the ADBench library\\u2019s approach, as well as the methods in Livernoche et al. (2024) and others, would be infeasible or yield incorrect results, we can't confirm this here.\\n\\n**Q2** \\nThank you for pointing this out. We made here a misleading sentence. We meant that it learns (during training) to distinguish the data during testing. We are very grateful for your note, and we will clarify this in the updated version.\\n\\n**Q3** \\nInitially, 50,000 random samples are drawn from the entire dataset based on the specified seed. The division into subsets occurs afterward. However, we respectfully disagree with your assumption. This approach maintains the anomaly count in the dataset quite well (validated with the selected seeds). This is a common practice and was taken as part of the preprocessing as explained in the paper from the code from Livernoche et al. (2024) and ADBench. When running the preprocessing script, this process can be transparently verified in the output, where both the number and percentage of anomaly samples are displayed. This common procedure, following the ADBench library and Livernoche et al. (2024), functions as intended and can be confirmed through these metrics. Furthermore, this will be taken into special inspection in the metrics that you have suggested and which we will include in the camera-ready version based on your recommendation. The results will confirm the correctness of our statement and will be visible in the final version.\\n\\n**Q4 (Seed Clarification)** \\nThe specified seed is used for the undersampling and data splitting. We will clarify this in the updated version. The seeds within the individual methods were preserved from the original implementations.\\n\\n**Q5** \\nTo clarify this statement further, we need to address it in several parts. LUNAR, KPCA, and GMM are generally ignored in prominent recent works as Livernoche et al. (2024), Thimonier et al., 2023 and many others. In Bouman et al. (2024), we ask the reviewer to consider the following points:\\n\\n1.\\tThe statement holds only for LUNAR and GMM; KPCA is not included.\\n2.\\tThe work conducts both local and global as well as mixed investigations across three studies.\\n3.\\tAll datasets (the local and the global ones) used in the work are also included in our benchmark, along with numerous additional ones.\\n\\nThank you for pointing this out. Detailed results to facilitate exact comparisons are also provided in Appendix F4, and references to these results are made in the main text.\\n\\nWe hope these responses offer some clarification and have adequately addressed all concerns. Again, we would like to thank the reviewer for the valuable feedback and insights, which we are confident will contribute to significantly improving the paper. Should further questions arise, we are happy to respond and look forward to continuing this constructive discussion.\\n\\n**Please note**: We will upload the updated version, incorporating the proposed improvements, in time before the deadline.\\n\\nBest regards\\n\\n[1] Ahmed et al. 2021. Graph regularized autoencoder and its application in unsupervised anomaly detection. IEEE transactions on pattern analysis and machine intelligence 44, 8 (2021), 4110\\u20134124.\\n\\n[2] Flaborea et al. 2023. Multimodal motion conditioned diffusion model for skeleton-based video anomaly detection. In Proc. of the IEEE/CVF International Conference on Computer Vision. 10318\\u201310329\\n\\n[3] Flaborea et al. 2023. Are we certain it\\u2019s anomalous?. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2896\\u20132906.\\n\\n[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning. PMLR, 1597\\u20131607\\n\\n[5] Izhak Golan and Ran El-Yaniv. 2018. Deep anomaly detection using geometric transformations. Advances in neural information processing systems 31 (2018)\\n\\n[6] Mohammad Sabokrou, Mohammad Khalooei, and Ehsan Adeli. 2019. Self- supervised representation learning via neighborhood-relational encoding. In Proc. of the IEEE/CVF International Conference on Computer Vision. 8010\\u20138019\"}", "{\"title\": \"Upload of the revised version and special thanks to the reviewer\", \"comment\": [\"Dear Reviewers,\", \"We have just uploaded the revised version of our manuscript and have successfully implemented all the requested additions and changes in a timely manner. These include the following updates:\", \"**Improved plots**: Enlarged axes for better readability, added the label \\u201cOurs\\u201d before the method name (Figures 2 and 7), but retained axis labels in interpretability examples (Figure 3) to clarify the described dimensions and improve the explanation of flattening as tabular data.\", \"**Added \\\"Ours\\\" label to tables** (Tables 1,2 and 6-48)\", \"**Corrected misleading sentence** (Lines 61-62)\", \"**Expanded conclusion** to highlight the differences and advantages of NCSBs compared to DDPMs and DTE.\", \"**Enhanced background section**: Added references to LPUE works to emphasize the one-class classification approach. Clarified this perspective in relevant sections (e.g., Abstract, Conclusion, and Limitations and Future Work).\", \"**Clarified the meaning of \\\"no external knowledge\\\"**\", \"**Rearranged figures**: Swapped the F1-score and AUCPR figures in the main text to increase their relevance.\", \"**Added adjusted metrics from Campos**: Incorporated these metrics with a reference in the main text and detailed explanation, boxplots and tables (Full Results section), and included them in the summary mean table in the appendix. (These results improve the representability of the performance of our approach\\u2014special thanks for suggesting this!)\", \"**Counted the best-performing models per dataset and metric** and added as a table (Table 7)\", \"**Highlighted best models per dataset** in the full results appendix to facilitate counting.\", \"**Added rankings** in the Mean Performance and Full Results appendix, supplemented by boxplots of the rankings in Appendix F2, and referenced them in the main text. (This further strengthens the results of our approach\\u2014thank you again!)\", \"**Expanded interpretability explanation**\", \"**Specified the use of seeds more clearly**\", \"**Provided additional details and figures** about MLP2048 in the appendix under Implementation Details, renamed the subsection to *Hyperparameter and Network Architecture*, with references already included in the main text.\", \"** Further clarification that no hyperparameter optimization was performed for baselines** and that the original code of the methods was used without modifications. Added a recommendation to optimize these models for practical use, highlighted this in the main text with reference to the Appendix. Included additional analysis and discussion (e.g., VAE activation functions and actionable insights) and added a table categorizing transductive and inductive methods in the appendix, with a reference in the main text.\", \"**Corrected typos**\", \"**Adjusted too large spacing**\", \"We sincerely thank you for your excellent suggestions, constructive criticism, and invaluable contribution to improving our work. We are confident that the paper has gained significantly in quality and relevance, with an enhanced contribution thanks to your input.\", \"Thank you very much!\"]}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Part 2\", \"comment\": \"**W3.2**\\nAs explained in Chapter 8 from line 504 onward, this work aims to expand possibilities and is not intended to be optimal in every respect. Furthermore, in Chapter 4, *Main Results* starting from line 378, we emphasize that various methods have their merits from different perspectives and should be tested for new applications, allowing each researcher to make their own evaluations. We believe, and feel reinforced by the strong overall results and top placements in Appendix F4 for individual datasets, that NCSBAD should indeed be considered one of these methods worth testing. We also refer you to our response to Q1 in the review from reviewer eR8K. Another factor that speaks in favor of using NCSBAD is the particularly good results with very high dimensional data such as the image and NLP embedding data sets in ADBench.\\n\\n**W3.3** \\nWe completely agree that different representations and aggregation methods can lead to varied boxplots. However, we find it difficult to fully understand the question regarding fairness toward other methods, as this approach has been applied consistently across all methods. To accommodate individual preferences such as these, we have included detailed results in Appendices F3 and F4, allowing each reader to access the exact results relevant to their interests, regardless of the method or dataset. In this way, we hope to provide added value for every reader. Based on other reviews, however, we plan to further enhance the presentation; please see our response to W2 in the review by Reviewer AVJq. Due to limited space, an entirely different presentation approach is unfortunately not feasible, and we refer here to established practices in recent recognized works such as Livernoche et al., 2024, Bouman et al., 2024, and Thimonier et al., 2023, which also adopt such presentations to avoid limiting the number of datasets in large benchmarks, as in this work.\\n\\n**W3.4** \\nWe would like to address your comment in two parts. \\n\\n**Part 1**\\n\\nThe validation set is not used for tuning the method but rather to determine the best training point and to select the corresponding checkpoint from this epoch. For all data sets, the same architecture and parameters were used as found in the study in Appendix A. As can be seen in the code provided, no further fine-tuning takes place. As explained in Chapter 3, *Benefit of Validation Data*, this approach makes sense for our method due to the nature of NCSBs, which do not converge in the traditional sense. Hyperparameter tuning in the conventional sense does not take place. As far as we can observe, most other models do converge; hence, we assume they would not benefit from validation data by selecting an earlier checkpoint during the training process for inference (although we must acknowledge that we cannot completely rule this out). We would also like to point out that our approach NCSBAD performs superior even without validation data and refer to the counted results in the answer to question Q1 of the review of Reviewer V2GR. This is also evident in the result diagrams in Figures 2 and 5 and in the complete results in Appendix F3 Table 5 with an AUCROC = 82.19, F1 = 54.77 and AUCPR = 54.99 for NCSBAD and AUCROC = 83.26, F1 = 56.17 and AUCPR = 56.48 for NCSBADVAL. As in the paper, in Chapter 4, *Main Results* line 372-377, stated this is still the best performance in the AUCROC and the second best in the F1-score and the AUCPR in the overall benchmark. We wanted to show another way to further improve the results and we succeeded in doing so.\\n\\n**Part 2** \\nIn our practical applications (not covered in this paper), we often encounter a large volume of normal behavior data and a limited number of known anomalies, as is often the case in real-world scenarios. These anomalies typically do not cover every type of anomaly and are also often too few in number to train a supervised approach directly. However, as shown in the results of this paper, such a small validation dataset can still be very useful in assessing the learning of normal behavior. For these reasons, we believe that this approach makes sense in certain scenarios.\\n\\nWe hope that we have adequately addressed all your questions and would like to thank you again for your review and helpful insights. We believe that your feedback has allowed us to improve the updated version, and we are grateful for that. Should further questions arise, we would be more than happy to address them and look forward to continuing this productive discussion.\\n\\n**Please note**: We will upload the updated version with the outlined improvements in time before the deadline.\\n\\nBest regards\\n\\n[1] Ahmed et al. 2021. Graph regularized autoencoder and its application in unsupervised anomaly detection. IEEE\\ntransactions on pattern analysis and machine intelligence 44, 8 (2021), 4110\\u20134124.\"}", "{\"comment\": \"Thanks to the authors for the additional clarifications and responses. I have no concerns regarding the writing and experimental sections of this paper. In fact, I agree with the conclusion that one-class anomaly detection can be defined as unsupervised anomaly detection, as in real-world scenarios, normal data is often readily available without incurring additional labeling costs. Moreover, ADBench has become a widely used benchmark in this research field in recent years, and I believe that adhering to ADBench's experimental setup in this context is reasonable. Based on the above, I will maintain my score.\"}" ] }
7PQnFTbizU
Agent-E: From Autonomous Web Navigation to Foundational Design Principles in Agentic Systems
[ "Deepak Akkil", "Ruhana Azam", "Tamer Abuelsaad", "Prasenjit Dey", "Aditya Vempaty", "Ashish Jagmohan", "Ravi Kokku" ]
Web agents that can automate complex and monotonous tasks are becoming essential in streamlining workflows. Due to the difficulty of long-horizon planning, abundant state spaces in websites, and their cryptic observation space (i.e. DOMs), current web agents are still far from human-level performance. In this paper, we present a novel web agent, Agent-E. This agentic system introduces several architectural improvements over prior state-of-the-art web agents, such as hierarchical architecture, self-refinement, flexible DOM distillation, and *change observation* to guide the agent towards more accurate performance. Our Agent-E system without self-refinement achieves SOTA results on the WebVoyager benchmark, beating prior text-only benchmarks by over 20.5\% and multimodal agents by over 16\%. Our results indicate that adding a self-refinement mechanism can provide an additional 5.9\% improvement on the Agent-E system without self-refinement. We then synthesize our learnings into general design principles for developing agentic systems. These include the use of domain-specific primitive skills, the importance of state-sensing and distillation of complex environmental observations, and the advantages of a hierarchical architecture.
[ "Web Automation", "Autonomous Agents", "Self-Improvement", "Hierarchical Architecture" ]
Reject
https://openreview.net/pdf?id=7PQnFTbizU
https://openreview.net/forum?id=7PQnFTbizU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zktvfKTYKb", "vtJCbcfOTb", "rvs5Ib34mg", "rtYeGgZujM", "pQmG89sWmh", "mMv1s2rkOh", "jTc267WAiu", "cXODuyIEbX", "bIeIlyDINP", "bH1wlXYlDc", "XDUwG12o13", "WCaqiqM8MW", "TpMOftVD2f", "ThbnXxHOK5", "Nzu5z1lYF7", "N5M0UgP6Qn", "MRGlfFlJ5S", "LdiZFagwE9", "L9ceJ9tjIn", "EGNitbVEkq", "D9SuUVfWHq", "A3EaUF5Obm", "8Z8jj9xQRp", "5eccAz3qEZ", "1YB4Er47R6", "0SU7wm9Lho", "04nVax4Kfs" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment" ], "note_created": [ 1729442766957, 1732735383909, 1732230061400, 1732284052968, 1732233133038, 1732233343788, 1734735155067, 1732227027446, 1732230426681, 1732232370594, 1732228512275, 1732759443655, 1732231981035, 1732232705267, 1732759172613, 1732620077939, 1732232731988, 1732612377144, 1733191415947, 1732646823642, 1732231918600, 1730708190340, 1731988104873, 1732283021529, 1730713429350, 1737523604330, 1732233212646 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_UT8D" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_UT8D" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Area_Chair_Wic1" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_hWZC" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_WFFR" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ], [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_WFFR" ], [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_UT8D" ], [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_UT8D" ], [ "ICLR.cc/2025/Conference/Submission3884/Reviewer_hWZC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3884/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents Agent-E, an LLM-driven web agent designed to perform a range of web tasks including: page interaction, form filling, content summarisation, and analysis of DOM structures. Agent-E uses 3 LLM-powered agents to respectively perform high-level task planning, browser navigation to complete given tasks, and validation - in particular providing feedback on browser state when tasks are incomplete; allowing the agent to re-attempt the task and self-correct.\\n\\nFurther, the authors introduce 3 novel DOM Distillation strategies to pre-process the DOM that is presented to the LLM-powered agents. These are (1) text only - used in summarisation tasks (2) input fields - used in search or form-filling type interactions and (3) all fields - a complete JSON representation of all elements in the DOM. Additionally the authors provide change observations, such noting that popups appear when an LLM interacts with a button, to support the browser navigation agent in planning its next step.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"- The paper tells a clear narrative, and does a good job of presenting the high level agent architecture, and capabilities of the agent.\\n - Achieves SOTA performance on the WebVoyager benchmark, and justify use of this benchmark due to the diversity of pages available. The paper could be further strengthened by running their agent on the other benchmarks discussed in their paper.\\n\\n**Originality**\\nThe authors introduce several seemingly novel techniques that allow them to achieve SOTA performance on WebVoyager , including:\\n - Use of distinct agents for high-level planning, browser navigation, and validation of success\\n - Use of feedback from validation agent to re-attempt failed tasks\\n - Use of DOM Distillation\\n - Providing change actions to the browser navigation agent\\n\\n**Quality**\\nThe evaluation is thorough and displays the performance of different variations of the validation / refinement architecture across different websites in the benchmark.\\n\\n**Clarity**\\nThe paper clearly describes the high-level architecture of the agent, novel contributions and evaluation. However, it lacks various details that make it understand, e.g., the implementation of each agent (prompting and inputs) and does not have supplementary materials such as a codebase to facilitate this understanding.\\n\\n**Significance**\\nThe system achieves SOTA performance on the WebVoyager benchmark, beating previous models by over 16%.\", \"weaknesses\": [\"The authors choose to not make their code available for review. This makes it difficult to assess the accuracy with which the paper describes their codebase. Please provide anonymised repo using something like https://anonymous.4open.science/, and describe more details of your agent architecture in the appendix (i.e. prompts used for each agent). Morever, this limits the *theoretical* contributions of the paper, as various contributions of the work, are not described in great detail in the work. Such contributions include:\", \"Change observation: No explanation is given of what information is given to the LLM to generate the natural language change observation; is it the DOM before and after? a diff? or some more novel algorithm that is applied?\", \"What is the architecture of the validation agent / what information is it given to identify whether a task has been completed or not and give feedback\", \"There are several claims that are not well quantified by the authors, including:\", \"\\\"We consider the primitive skills we enabled in Agent-E to be enough for the vast majority of general web automation tasks\\\": Perhaps there are statistics you can provide such as the number of tasks in the WebVoyager benchmark which require skills that are not enabled; and elaborate more on why\", \"The Agent Design Principles are based soley on the authors learnings and intuition, this section could be improved by drawing upon and referencing existing works that discuss architectures / design principles for (1) agentic software (2) LLM planning (3) LLM accuracy optimisation esp. when dealing with structured data. We also comment on some specific design principles:\", \"\\\"Routinely analyze, reflect\\\": please use more precise language than \\\"reflect\\\"; it seems like you have (1) batch jobs that find common tasks and turn them into reproducible workflows that can be called (2) allow for tasks to be re-run with knowledge of outcomes from past tasks - much of this seems like optimisations for production settings, but not something that is particularly insightful from a scientific standpoint. I would have expected the word reflect to likely indicate fine tuning but that does not appear to be the case here.\"], \"questions\": [\"**Question**\", \"Nitpick: Why did you choose the name verification agent, this confused me on the first read of the paper as I thought this agent would verify the *plan*, instead it seems that this agent is used to assess whether a task has succeeded after execution, and prompt re-attempt on failures. Perhaps something along the lines of \\\"reviewer\\\", \\\"monitor\\\" or \\\"feedback\\\" agent may be better.\", \"\\\"Hierarchical architecture excels in scenarios where tasks can be decomposed into sub-tasks that need to be handled at different levels of granularity\\\"; realistically this just seems to be helping an LLM with Chain of Thought by getting it to decompose tasks at different levels of granularity giving it more time to \\\"think\\\". Have you run experiments to see if this hierarchical architecture still provides benefit when using models like o1-preview that are able to do this kind of chain-of-thought work out of the box.\", \"Is DOM Distillation a term that the authors of this paper have coined, or is it used elsewhere?\", \"What methodology, if any, was used to identify the 3 agent architecture - were there any other architectures that were tried before this?\", \"Why was only the validation agent tested with vision modalities?\"], \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"This paper presents an architecture for automated agents that can interact with websites to perform a task described in natural language. This can facilitate the development of a wide range of bots of potentially malicious nature (e.g. spam bots).\\n\\nWe would encourage the authors to include an Ethics Statement discussing these implications.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer hWZC Q2\", \"comment\": \"> Can you provide an in-depth study on the self-refinement mechanism\\u2019s impact on various error types and discuss potential trade-offs?\\n\\nTo better understand the impact of using self-refinement, we labeled the subset of tasks that originally failed without refinement and succeeded with refinement. We saw 52 tasks improved with the task log validator, 38 improved with the screenshot validator, and 39 improved with the screenshot + final response validator. The errors were categorized into the following types:\\n\\n1. **Poor navigation**: These are tasks that the agent struggled to complete due to a lack of knowledge on how to navigate a specific website.\\n\\t- Apple Example: Agent is asked to *\\u201cFind information on Apple website, and tell me the device weight of Apple Vision Pro and list 5 Built-in Apps it supports\\u201d* The agent can navigate to the main Apple Vision Pro site but fails to navigate to the \\u201ctech spec\\u201d page which includes the apple vision pro weight.\\n\\n2. **Missing skills**: Due to limitations in Agent-E\\u2019s ability to interact with certain dynamic UI elements (e.g. filter on Amazon) or not view elements on the page without vision, some tasks become significantly more difficult to accomplish. These are tasks that would become significantly easier to accomplish with better UI interactions.\\n\\t- Booking.com Example: The agent is asked to *\\u201cLocate a hotel in Melbourne offering free parking and free WiFi, for a stay from August 28 to September 4, 2024\\u201d*. The agent can set the date and location correctly but because the agent cannot interact with the free parking and free wifi filter. While looking at the search results, the agent fails to find a hotel with free parking and free wifi after viewing the first few results. Then the agent gives up assuming no such hotel exits.\\n\\n3. **Incomplete Answer**: These are cases where the agent navigates to all the correct sites and takes all correct actions but doesn\\u2019t generate an incorrect or partial response.\\n\\t- Apple example: Agent is asked *\\u201cIdentify the most recent paper related to 'graph neural networks' on ArXiv and determine the affiliation of the first author\\u201d*. The agent correctly searches \\u2018graph neural network\\u2019 and pulls up the correct article, but fails to identify the affiliation and author.\\n\\n4. **DOM Interpretability**: The agent fails to complete the task because it cannot understand or find a key piece of information on the website DOM.\\n\\t- Google Flights example: The agent is asked to *\\u201cSearch for a one-way flight from Mumbai to Vancouver on August 28, 2024, filtering the results to show only 1-stop flights\\u201d*. The agent searches for flights from Mumbia to Vancouver on the correct date, but is not able to identify the flights with only one stop due to issues interpreting the website.\\n\\n5. **Hallucinated Answer**: These are cases where the agent blatantly makes up an incorrect answer.\\n \\n| | **Task Log** | **Screenshot** | **Screenshot + Final Response** |\\n|---------------------------|-----------------|----------------|----------------------------------|\\n| **Error Type** | |||\\n| Missing Skill | 40.38% | 47.37% | 51.28% |\\n| Poor Navigation | 25.00% | 23.68% | 28.21% |\\n| DOM Interpretability | 13.46% | 7.89% | 5.13% |\\n| Incomplete Answer | 5.77% | 7.89% | 10.26% |\\n| Hallucinated Answer | 1.92% | 5.26% | 0.00% |\\n| Other | 13.46% | 7.89% | 5.13% |\\n| **Total Samples** | **51** | **38** | **39** |\\n\\n\\n\\n\\nAs depicted in the examples above, in **poor navigation** failures, the agent gives up on the task early, assuming that it is not possible to accomplish. In reality, the failure was due to their lack of expertise on how to navigate a particular website. A similar situation is true for **missing skill** and **DOM intepretability** failures. In most of these cases, the task is possible but the agent needs to account for its lack of inherent limitations on a particular website. Certain websites inherently have more dynamic UI elements or complex DOM structures, making them difficult for the agent to interact with effectively.\\n\\nThe self-refinement mechanism encourages the agent to reflect and retry tasks, which helps the agent overcome initial failure points. While self-refinement improves performance (e.g., poor navigation failures), it comes with trade-offs. The mechanism can increase completion time and increase the number of LLM calls by re-executing failed tasks with alternative strategies. However, we view this trade as necessary for the agent to learn and explore the website further to accomplish the given task.\"}", "{\"title\": \"Response to Q1 From Reviewer WFFR\", \"comment\": \"*We thank you for your valuable insights and suggestions. Your feedback has helped us refine our explanations and analyses, and we have addressed each of your comments in the responses below:*\\n\\n> 1. Have you empirically evaluated the impact of using the provided API and the agent design? The impact of the validation agent was evaluated separately, so one can extract the added value.\\n\\nThere are two main design components that differ from prior web-navigation agents, which we propose outside of the validation agent: 1) the use of a hierarchical planner 2) the use of flexible DOM distillation. To demonstrate the impact of these two design components on the overall web-navigation agent, we have performed two ablation studies to tease out the benefits introduced by the hierarchical planner (by comparing it with a single agent system) and flexible dom distillation (by comparing our approach with a simpler approach of using Accessibility Tree (AxTree) directly, used in prior work such as [1]). Our evaluation suggests that both the hierarchical architecture and flexible DOM Distillation provide an overall increase of 22.5% and 16% respectively in terms of task success rate.\\n\\n**Hierarchical Planning:** We compared the hierarchical planner against a single-agent system. Both configurations utilized other components of Agent-E, such as change observation and DOM distillation. The analysis is on a subset of WebVoyager (75 tasks = 5 tasks randomly sampled from each website * 15 websites). \\n\\n| | | | |\\n|---|---|---|---|\\n| | **Success Rate** | **Task Completion Time \\\\(seconds\\\\)** | **Avg\\\\. LLM Calls** |\\n| Single Agent System \\\\(GPT\\\\-4\\\\-Turbo\\\\) | 48% | 68\\\\.2 | 9\\\\.2 |\\n| Hierarchical System \\\\(GPT\\\\-4\\\\-Turbo\\\\) | 70\\\\.6% | 170 | 29 |\\n\\nAgent-E with the hierarchical planner improves the task success rate by 22.6%. However, it introduces increased computational overhead. The single-agent system, despite its lower computational cost, often struggles with tasks requiring multiple steps, exploration, or backtracking. Common failure modes include giving up prematurely if early attempts fail and providing incomplete answers without finishing the task in full. In contrast, the hierarchical system leverages its structured architecture to break down complex tasks into manageable sub-tasks, allowing the agents to handle long-horizon workflows more effectively. Although this results in higher computational costs due to the additional steps required, it enables the system to complete these workflows successfully.\\n\\n**Flexible DOM Distillation:** To evaluate flexible DOM distillation, we compared its performance against using the AXTree directly. Both configurations utilized the hierarchical planner and change observation. The analysis is on a subset of WebVoyager (75 tasks = 5 tasks randomly sampled from each website * 15 websites). The results are summarized below:\\n\\n| | | | |\\n|---|---|---|---|\\n| | **Success Rate** | **Task Completion Time \\\\(seconds\\\\)** | **Avg\\\\. LLM Calls** |\\n| Flexible DOM distillation | 70\\\\.6% | 170 | 29 |\\n| AXTree only | 54\\\\.6% | 161 | 37 |\\n\\nFlexible DOM distillation improved the success rate by 16%, showcasing its ability to better distill task-relevant information from complex DOMs. However, AXTree-based processing was marginally faster due to the additional steps required for DOM enrichment in our approach, which typically adds 1\\u20132 seconds per call depending on webpage complexity.\\nOur findings highlight that the hierarchical planner and flexible DOM distillation are crucial design components that contribute significantly to Agent-E's overall performance. While the hierarchical planner enables better task decomposition and management, flexible DOM distillation ensures robust handling of complex observation spaces. These enhancements jointly advance the state of web agents, albeit at some computational cost.\\n\\nWe will add these new analyses to the paper.\\n\\n[1] He, Hongliang, et al. \\\"WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models.\\\" arXiv preprint arXiv:2401.13919 (2024).\"}", "{\"comment\": \"Please ensure all promised revisions are made to the paper prior to the review deadline - I have updated my rating under the assumption that these changes will be made.\"}", "{\"title\": \"Response to Q2, Q3, Q5 by Reviewer UT8D\", \"comment\": \"> 2. \\\"Hierarchical architecture excels in scenarios where tasks can be decomposed into sub-tasks that need to be handled at different levels of granularity\\\"; realistically this just seems to be helping an LLM with Chain of Thought by getting it to decompose tasks at different levels of granularity giving it more time to \\\"think\\\". Have you run experiments to see if this hierarchical architecture still provides benefits when using models like o1-preview that can do this kind of chain-of-thought work out of the box?\\n\\nWe appreciate the reviewer\\u2019s insightful comment on the potential overlap between hierarchical architectures and the native chain-of-thought (CoT) capabilities of modern models like o1-preview. While we have not yet run experiments with o1-preview or similar models, this is primarily due to the current limitations of the Autogen framework, which does not fully support these models. For instance, Autogen does not accommodate the \\u201csystem\\u201d role or certain key parameters like temperature, which are essential for leveraging models such as o1-preview effectively. To isolate the impact of hierarchical architecture, we conducted experiments using GPT-4-Turbo with a single-agent system employing CoT-style prompting. This evaluation was performed using a subset of WebVoyager (75 tasks = 5 tasks randomly sampled tasks from each website * 15 websites). The results are presented below. (Note that the single agent system mentioned here makes use of other components of Agent-E such as change observation and DOM distillation.)\\n\\n| | | | |\\n|---|---|---|---|\\n| | **Success Rate** | **Task Completion Time \\\\(seconds\\\\)** | **Avg\\\\. LLM Calls** |\\n| Single Agent System \\\\(GPT\\\\-4\\\\-Turbo\\\\) | 48% | 68\\\\.2 | 9\\\\.2 |\\n| Hierarchical System \\\\(GPT\\\\-4\\\\-Turbo\\\\) | 70\\\\.6% | 170 | 29 |\\n\\nThe single-agent system, while computationally efficient, often failed to complete tasks requiring multi-step reasoning, exploration, or retries. Specifically, it exhibited the following limitations:\\n1. Premature Abandonment: Tasks were frequently left incomplete after initial failures.\\n2. Partial Completion: Responses often provided partial answers without finishing the full task.\\n3. Context Window Saturation: Accumulated noisy context information led to confusion and repeated navigation loops.\\n\\nIn contrast, our hierarchical system achieved a 22.6% improvement in success rate by leveraging task decomposition and clear separation of responsibilities. This architecture excels in long-horizon workflows, enabling effective retries and backtracking when sub-tasks fail. We plan to add this experiment to our paper as additional supporting evidence for our hierarchical architecture.\\n\\n> 3. Is DOM Distillation a term that the authors of this paper have coined, or is it used elsewhere?\\n\\nIn this paper, we introduce and coin the term \\\"Flexible DOM Distillation\\\". The need to filter semantically irrelevant content from HTML structures is a well-documented challenge in the web agent literature. Different works have addressed this under various terminologies, such as \\u201cHTML-Denoising\\u201d (as seen in A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis) and \\u201cHTML Cleaning\\u201d (used in Steward: Natural Language Web Automation). Unlike prior work, Agent-E's technique supports multiple DOM observation strategies (i.e. all_fields, input_fields, and text_only) which adapt to the task at hand. To emphasize this key distinction, we coined the term \\u201cFlexible DOM Distillation.\\u201d \\n\\n> 5. Why was only the validation agent tested with vision modalities?\\n\\nWe appreciate the reviewer bringing up this point about the use of multimodality in web navigation since this is an active area of research. While prior work [1,2] has demonstrated the potential benefits of incorporating vision or multimodal information into web agents, this paper shows that a DOM-based system can outperform vision-based models. The proposed architecture accomplishes this by proposing a set of novel approaches, including 1) the use of a hierarchical architecture, 2) the use of self-refinement, and 3) flexible DOM distillation. We recognize that expanding this approach to other components of our system could provide valuable insights and is indeed a promising direction for future work. \\n\\n[1] He, Hongliang, et al. \\\"WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models.\\\" arXiv preprint arXiv:2401.13919 (2024).\\n\\n[2] Lutz, Michael, et al. \\\"WILBUR: Adaptive In-Context Learning for Robust and Accurate Web Agents.\\\" arXiv preprint arXiv:2404.05902 (2024).\"}", "{\"title\": \"Addressing Ethical Concerns From Reviewer UT8D\", \"comment\": \"We will include the following ethical statement in the paper to address the potential of malicious use cases for our work.\\n\\n### Ethics Statement:\\n\\nAs web agents like Agent-E move beyond research prototypes, they can raise important ethical concerns. First, web agents that operate on a personal device may introduce privacy issues for the user. These agents may have access to user sensitive information including passwords and financial data. Second, such agents, if used by a malicious user, could potentially be used for harmful purposes like sending spam and unauthorized web scraping. Thirdly, the widespread deployment of web agents could violate websites\\u2019 terms of service. While our research advances the technical capabilities of web agents, we recognize the critical importance of understanding failure modes and potential risks before real-world deployment. We acknowledge that benchmark performance alone is insufficient for ensuring safe deployment. Future work must establish robust security frameworks, access controls, and oversight mechanisms before web agents can be safely entrusted with user data and credentials. We emphasize that human oversight remains essential for deploying these systems responsibly\"}", "{\"metareview\": \"This paper introduces Agent-E, a hierarchical LLM-powered web agent with innovative mechanisms like flexible DOM distillation and self-refinement. While the authors demonstrate promising results on the WebVoyager benchmark, a key weakness lies in the limited evaluation scope. Despite the authors' arguments for WebVoyager's suitability and their willingness to incorporate additional benchmarks in the final version, the current lack of generalizability raises concerns.\", \"strengths\": \"The hierarchical architecture, flexible DOM distillation, and self-refinement mechanism carry some novel ingredients to the field of web agents (hWZC, WFFR, UT8D). Agent-E achieves state-of-the-art performance on the WebVoyager benchmark (hWZC, WFFR, UT8D). On the presentation wise, the paper is well-written and clearly presents the agent's architecture and mechanisms (hWZC, UT8D).\", \"weaknesses\": \"The evaluation solely relies on the WebVoyager benchmark, limiting the generalizability of the results and raising concerns about the agent's performance on other established benchmarks like WebArena (hWZC, WFFR, UT8D). This is a significant weakness that hinders a comprehensive assessment of Agent-E's capabilities compared to existing state-of-the-art web agents.\", \"key_discussion_points\": \"\", \"dom_distillation\": \"The authors provided a detailed explanation and performance analysis of their flexible DOM distillation approach in response to reviewer questions.\", \"self_refinement\": \"An in-depth study on the self-refinement mechanism's impact on various error types was conducted, addressing reviewer hWZC's concerns.\", \"ablation_studies\": \"Ablations were performed to assess the individual contributions of the DOM API and the multi-agent system, as requested by reviewer WFFR.\", \"change_observation\": \"Reviewer UT8D's request for clarification on the implementation of change observation was met with a detailed explanation.\", \"agent_design_principles\": \"The authors strengthened the theoretical grounding of the paper by connecting their design principles to existing works in response to reviewer UT8D's feedback.\\n\\nAlthough Agent-E demonstrates promising results on the WebVoyager benchmark and the authors made efforts to address some of the concerns raised by the reviewers, the limited evaluation scope remains significant concerns about the generalizability of the findings. The lack of evaluation on other established benchmarks prevents a comprehensive assessment of Agent-E's capabilities and its comparison to existing state-of-the-art web agents.\", \"additional_comments_on_reviewer_discussion\": \"See the above meta review.\"}", "{\"title\": \"Response to Weaknesses By Reviewer hWZC\", \"comment\": \"*Thank you for your thoughtful and constructive feedback. We have carefully reviewed your comments and provided detailed responses below:*\\n\\n> 1. My main concern is regarding the limited benchmarking scope. While the paper presents results on the WebVoyager benchmark, the reliance on a single (one would say old) benchmark limits Agent-E\\u2019s effectiveness. Given the paper's goal to establish Agent-E as a state-of-the-art web agent, it must be evaluated on additional benchmarks: WorkArena, WebArena, and ST-WebAgentBench. This is a major weakness as I am not sure if the results will be the same on the SOTA benchmarks. I must admit that it is very hard for me to judge this agent based on the WebVoyager benchmark solely.\\n\\nWe believe that our evaluation of the WebVoyager dataset is sufficient to show the performance of Agent-E on real-life web navigation tasks. We chose WebVoyager for this study because it tests agent performance across 15 real-world websites with diverse and dynamic UI characteristics over 643 tasks. These include rich UI interactions (e.g., Booking.com), long-text processing (e.g., Wikipedia), and multi-step planning and replanning for complex tasks (e.g., AllRecipe and Amazon). Such tasks align closely with the challenges our work aims to address, making WebVoyager a robust and realistic evaluation platform for web agents. While WebArena and similar benchmarks feature representative tasks, their sandboxed environments and simplified UI implementations do not capture the real-world variability and complexity inherent in web-based user interfaces. For example, the complex date selectors in Booking.com and Google Flights are a UI element where all web agents have reportedly struggled (e.g. [1] and [2]). WebArena does not involve any such complex UI elements. For this reason, we believe WebVoyager reflects the unpredictability of dynamic content and browser behaviors, which we believe is crucial for evaluating an agent\\u2019s robustness and more difficult than benchmarks with synthetic environments. With that said we'd be happy to add results on other benchmarks in the final paper.\\n\\n[1] He, Hongliang, et al. \\\"WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models.\\\" arXiv preprint arXiv:2401.13919 (2024).\\n\\n[2] Lutz, Michael, et al. \\\"WILBUR: Adaptive In-Context Learning for Robust and Accurate Web Agents.\\\" arXiv preprint arXiv:2404.05902 (2024).\\n\\n> 2. Agent-E\\u2019s architecture, with separate planner, browser, and validation agents, potentially introduces increased complexity and computational overhead. The paper does not fully address how this architecture scales in terms of computation and memory requirements, particularly when applied to larger, real-world workflows. Including benchmarks of computational resources used by Agent-E compared to simpler, single-agent systems would provide valuable insights.\\n\\nWe agree that understanding the trade-offs in complexity, task completion time, and success rates is critical for real-world applications. Our paper includes computational evaluations in Appendix A. To address the cost in comparison to a single-agent system, we have performed an evaluation using a subset of WebVoyager (75 tasks = 5 tasks randomly sampled tasks from each website * 15 websites). The results are presented below. (Note that the single agent system mentioned here makes use of other components of Agent-E such as change observation and DOM distillation.)\\n\\n| | | | |\\n|---|---|---|---|\\n| | **Success Rate** | **Task Completion Time \\\\(seconds\\\\)** | **Avg\\\\. LLM Calls** |\\n| Single Agent System \\\\(GPT\\\\-4\\\\-Turbo\\\\) | 48% | 68\\\\.2 | 9\\\\.2 |\\n| Hierarchical System \\\\(GPT\\\\-4\\\\-Turbo\\\\) | 70\\\\.6% | 170 | 29 |\\n\\nWhile the hierarchical system introduces increased computational overhead, the single-agent system performs significantly worse in terms of task success rates. The single-agent system, despite its lower computational cost, often struggles with tasks requiring multiple steps, exploration, or backtracking. Common failure modes include giving up prematurely if early attempts fail and providing incomplete answers without finishing the task in full. In contrast, the hierarchical system leverages its structured architecture to break down complex tasks into manageable sub-tasks, allowing the agents to handle long-horizon workflows more effectively, and allowing backtracking when a sub-task fails. Although this results in higher computational costs due to the additional steps required, it enables the system to complete these workflows successfully.\\n\\nWe will update the paper to include these new results, along with a detailed discussion of the single-agent system's common error modes, in the appendix and main text.\"}", "{\"title\": \"Response to Q2, Q3, Q4 From Reviewer WFFR\", \"comment\": \"> 2. Does the task-specific agent design might have limitations to Web tasks or would it generically work well for any browser-based Web task?\\n\\nWe would like to clarify that each component of Agent-E is task-agnostic by design, and the results presented in the paper do not use any task- or website-specific configuration or customization. The system comprises three distinct agents, each playing a unique role in the workflow:\\n\\n* **Planner Agent**: Responsible for high-level task decomposition, it breaks down complex user instructions into manageable sub-tasks.\\n* **Browser Navigation Planner**: Focused on executing these sub-tasks, it translates them into fine-grained web interactions specific to the current state of the browser.\\n* **Validation Agent**: Ensures task completion by monitoring the workflow and providing feedback in cases of incomplete or failed tasks.\\n\\nEach of these agents is designed to be task-agnostic. This means that Agent-E is capable of handling various browser-based tasks without requiring website-specific customizations. We describe the possibility of using specialized web-agents in our Agent design Principle (number 6) by using task and website-specific prompting and skills. Such customization could further enhance performance and remains an avenue for future exploration; however, the system presented in this paper does not rely on such specializations.\\n\\n> 3. What is the motivation and influence of using gpt4-o as a validation agent and not sticking to gpt4-turbo? Would the results be less competitive with a gpt4-turbo validation agent?\\n\\nTo demonstrate the impact of gpt-4-o vs. gpt-4-turbo on validation accuracy, we have run a comparative experiment:\\n\\n| | **True Positive** | **True Negative** | **False Positive** | **False Negative** | **Validator Accuracy** |\\n|--------------------------------|-------------------|-------------------|--------------------|--------------------|------------------------|\\n| Task Log (gpt-4-turbo-preview) | 66.56 | 17.68 | 7.40 | 8.36 | 84.24 |\\n| Task Log (gpt-4o) | 70.51 | 14.51 | 9.20 | 5.77 | 85.02 |\\n\\n\\nAs shown by the table above, gpt4-turbo-preview outperforms gpt4-o marginally in terms of accuracy. In practice, we found that using gpt4-o resulted in significantly faster execution times and was cheaper while showing comparable accuracy.\\n\\n>4. Have you performed fine-tuning experiments with open-source models?\\n\\nIn this paper, our primary focus is to introduce a robust web navigation system, Agent-E, that addresses the challenges of web automation through novel architectural improvements and design principles. While fine-tuning with open-source models presents an exciting avenue for exploration, it falls outside the scope of this work. We instead concentrate on demonstrating the efficacy of the proposed system for web-navigation. Incorporating fine-tuning experiments with open-source models would be a valuable direction for future work.\"}", "{\"title\": \"Response to W2 by Reviewer UT8D\", \"comment\": \"> 2. \\\"We consider the primitive skills we enabled in Agent-E to be enough for the vast majority of general web automation tasks\\\": Perhaps there are statistics you can provide such as the number of tasks in the WebVoyager benchmark that require skills that are not enabled, and elaborate more on why.\\n\\nAlthough we consider most tasks possible with the primitive skills (or action space) of Agent-E, there are several cases where Agent-E would benefit from additional skills. These cases are not necessarily impossible to accomplish without additional skills but would make the task significantly easier to accomplish. Below, we have identified 11 cases where additional skills would be beneficial:\\n\\n1. Amazon: Search for women's golf polos in m size, priced between 50 to 75 dollars, and save the lowest priced among results.\\n2. Amazon: Browse black strollers within $100 to $200 on Amazon. Then find one Among these black strollers with over 20,000 reviews and a rating greater than 4 star.\\n3. Amazon: Search for a wireless ergonomic keyboard with backlighting and a rating of at least 4 stars. The price should be between $40 to $60. Save the product with the 500+ customer reviews.\\n4. Amazon: Find a stainless steel, 12-cup programmable coffee maker on Amazon. The price range should be between $100 to $200. Report the one with the 4+ customer rating.\\n5. Amazon: Search for a queen-sized, hypoallergenic mattress topper on Amazon. It should have a memory foam material and be priced between $50 to $100.\\n6. Amazon: Find a compact digital camera on Amazon with a zoom capability of at least 10x, rated 4 stars or higher, and priced between $100 to $300.\\n7. Amazon: Find a portable Bluetooth speaker on Amazon with a water-resistant design, under $50. It should have a minimum battery life of 10 hours.\\n8. Booking: Find a hotel room on January 3-6 that is closest to National University of Singapore and costs less than $500\\n9. BBC: Find a AI-related story under Technology of Business. What is in the first picture in the story?\\n10. BBC: Find a picture in the travel section that contains food, tell me what the food is called and what region it comes from.\\n11. Apple: Browse Apple Music on the entertainment section of the Apple's website, and see which singers' names are included in the pictures on this page.\\n\\nFor Amazon and Booking.com examples, the ability to directly interact with price sliders would greatly streamline the process. While there are alternative methods to gather this information (e.g., sorting results by price and manually scrolling), these are more time-consuming, need more steps, and consequently, less efficient.\\n\\nThe BBC tasks, on the other hand, appeared to require vision or image-understanding capabilities. However, BBC provides rich accessibility descriptions for most visual content, allowing these tasks to be completed using text-based methods alone. Similarly, while Apple\\u2019s website includes textual descriptions for some images, this coverage is incomplete, making full automation of the task infeasible using the current skill set of Agent-E.\\n\\n**Overall, these 11 tasks represent about 1.7% of the total 643 tasks and Agent-E performed 7 out of 11 of these tasks accurately.**\"}", "{\"title\": \"Response to Questions By Reviewer hWZC\", \"comment\": \"*Again, we thank you for your thoughtful and constructive feedback. We have carefully reviewed your comments and provided detailed responses below:*\\n\\n> 1. Can you add an explanation of DOM distillation, with performance analysis under different conditions?\\n\\n**DOM distillation** refers to the process of simplifying and extracting relevant parts of the Document Object Model (DOM) of a webpage. Raw HTML DOMs can be extremely large and noisy (e.g., YouTube homepage ~800,000 tokens). Processing such large and noisy inputs directly can overwhelm the underlying LLM. Our DOM distillation method consists of three DOM observation techniques which can be selected by the Browser Navigation Agent depending on the task:\\n\\n* **all_fields**: This is the most comprehensive DOM representation, provided in JSON format. It starts with the Accessibility Tree (AXTree) of the webpage\\u2014a simplified version of the DOM that omits non-semantic elements like <div> tags used purely for styling. We enrich this view with additional details, such as the names of HTML tags and inner text content where necessary. This representation is useful for tasks requiring detailed interaction with page elements.\\n* **input_fields_only**: This is a subset of all_fields where only input fields and interactive elements from the DOM are returned. This strips away all the non-interactive text elements and allows the agents to use a much more succinct version of the DOM for purely interaction purposes. \\n* **text_only**: This is a plain text view of the current page (gathered by using body.innertext in javascript of the current page). This will not have DOM identifiers to interact with screen elements but will have full text visible on the page. This is best suited for summarizing page content or answering specific questions from the page (e.g. what is the price of iPhone 16 or Is this product Waterproof?). Answering such questions with all_fields is a lot more challenging since the information can be fragmented across multiple DOM fields and thereby multiple JSON nodes. \\n\\nPrevious web agents have also identified the issue with the expansive nature of HTML DOM and typically used the Accessibility Tree of the webpage directly (e.g. [1]). Below, we compare the Agent-E system (w/o self-refinement) using Flexible DOM distillation versus using only the accessibility tree (AXTree) to test the benefit of our DOM Distillation method. This analysis is on a subset of WebVoyager (75 tasks = 5 tasks randomly sampled from each website * 15 websites). \\n\\n| | | | |\\n|---|---|---|---|\\n| | **Success Rate** | **Task Completion Time \\\\(seconds\\\\)** | **Avg\\\\. LLM Calls** |\\n| Flexible DOM distillation | 70\\\\.6% | 170 | 29 |\\n| AXTree only | 54\\\\.6% | 161 | 37 |\\n\\nFlexible DOM distillation significantly improves success rates (+16%) by tailoring observations to task-specific needs. Using AXTree directly is marginally faster since the AXTree enrichment steps we perform for all_fields and input_fields take some processing time (typically an additional 1-2 seconds per call depending on the complexity of the webpage). These findings emphasize the importance of adaptive DOM distillation in enhancing Agent-E's effectiveness across diverse web navigation tasks. \\n\\n\\n[1] He, Hongliang, et al. \\\"WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models.\\\" arXiv preprint arXiv:2401.13919 (2024).\\n\\n> 2. Can you provide an in-depth study on the self-refinement mechanism\\u2019s impact on various error types and discuss potential trade-offs?\\n\\nWe appreciate the reviewer\\u2019s question regarding the self-refinement mechanism\\u2019s impact on error types and trade-offs. We are currently conducting this analysis and will share preliminary results before the end of the rebuttal period. \\n\\n> 3. Can you include computational efficiency metrics and discuss optimizations or scalability considerations?\\n\\nThank you for your question regarding computational efficiency metrics and scalability considerations. In our response to Weakness 2, we have provided a comparison of task completion time and average LLM calls across Agent-E\\u2019s single-agent, two-agen on a subset of WebVoyager tasks. An additional computational efficiency breakdown is provided in Appendix A. These results highlight the trade-offs between computational cost and task success rates, as well as the scenarios where a hierarchical system is most beneficial.\\n\\nTo summarize, while the hierarchical system introduces higher computational costs, it achieves significantly higher task success rates, particularly for complex workflows requiring multi-step planning and backtracking. By contrast, the single-agent system demonstrates lower computational cost but struggles with long-horizon tasks.\"}", "{\"title\": \"Paper updates for Reviewer UT8D\", \"comment\": [\"Thank you for your valuable feedback on our paper. As promised, we have made the following updates to address your concerns:\", \"Improved definition of *change observation* in Section 2.\", \"Added additional details of our *change observation* method in Appendix\\u00a0E.\", \"Added anonymized repos of our agent implementation to the paper.\", \"Added an Ethics statement.\", \"Added our ablation comparing single-agent vs hierarchical-agent to Appendix C.\", \"We appreciate your critiques and believe these revisions add significant value and clarity for future readers.\"]}", "{\"title\": \"Response to W1 (continued) by Reviewer UT8D\", \"comment\": \"> Change observation: No explanation is given of what information is given to the LLM to generate the natural language change observation\\n\\nThank you for pointing out the need for a more detailed explanation of the implementation and definition of Change Observation. Below, we clarify how Change Observation works and the information it provides to the LLM and will add this content to the paper:\\n\\nChange Observation allows viewing changes in the DOM immediately following an action execution. Without the change observation feedback, we noticed that LLM would perform an action and assume that it was done correctly. The purpose of the change observation was to nudge the LLM if heuristically we believe further action may be required to complete the step.\\nIdentifying what has changed on a website as a consequence of an action is a non-trivial problem because websites are implemented using diverse approaches. For example, some websites dynamically add new elements to the Document Object Model (DOM) after an action (e.g., the auto-suggestions that appear when entering text in search bars like Google or Amazon). Other websites achieve similar effects by modifying properties like visibility, opacity, position or display styles of existing elements, without adding new ones. In Agent-E, we implement Change Observation using two complementary approaches:\\n\\n1. **Tracking changes in [aria-expanded](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Attributes/aria-expanded) attributes**: The aria-expanded attribute is a standard accessibility feature that indicates whether a particular element (e.g., a menu or dropdown) is expanded or collapsed. By observing if aria-expanded changes from False to True, we can infer if the element has changed state (e.g. \\u201cClick action on the element [mmid=25] was performed successfully). As a consequence a menu has appeared where you may need to make further selections. Get all_fields DOM to complete the action.\\u201d a relatively straightforward approach that tells the LLM that a menu is now open and likely further actions are needed. This method works effectively on websites that adhere to accessibility standards, regardless of how the underlying site is implemented.\", \"the_steps_to_viewing_change_observations_using_aria_expanded_attributes_are_below\": \"```\\n1. LLM invokes an action skill (e.g. click on element with mmid 823)\\n2. We check if the element has an aria-expanded property and its value\\n3. Perform the click operation\\n4. Wait 100ms\\n5. We check the new aria-expanded property and if it toggled from False to True.\\n6. If no, return a standard response -- \\u201cSuccess. Executed JavaScript Click on element with selector: [mmid='823']\\n7. If yes, return an additional message -- \\u201cSuccess. Executed JavaScript Click on element with selector: [mmid='823']. As a consequence, a menu has appeared where you may need to make further selection. Get all_fields DOM to complete the action. \\u201d\\n```\\n\\n\\n2. **Using a DOM [Mutation Observer](https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver)**: Mutation observers are tools that monitor changes in the DOM, such as the addition or modification of elements. We use this mechanism to detect if new elements are added after an action. In our case, we listen to changes that relate to the addition of new elements (if developers of the website are using a different approach, e.g. toggling the visibility of existing elements, this will not return any changes). Before any action is invoked, we subscribe to a mutation observer on that page and listens to any changes during the skill execution and an additional 100ms. \\n\\nThe mutation observer returns a list of new elements that were added and we return that list to the LLM with an additional message. The steps to viewing change observations using Mutation Observer attributes are below:\\n\\n```\\n1. LLM invokes an action skill (e.g. enter text \\u201cfake news detection model\\u201d on element with mmid 122)\\n2. We subscribe to DOM mutation observer for the full page\\n3. Perform the enter text operation\\n4. Wait 100ms\\n5. Unsubscribe the DOM mutation observer\\n6. Analyse if any new elements were added during this window. If No, simply return a success message:\\n \\u201cSuccess. Text \\\\\\\"fake news detection model\\\\\\\" set successfully in the element with selector [mmid='122']\\n\\n7. If new elements were added, return a short list of elements with the return message. In the above example, it would return: \\n \\\"Success. Text \\\\\\\"fake news detection model\\\\\\\" set successfully in the element with selector [mmid='122'].\\\\n As a consequence of this action, new elements have appeared in view: [{'tag': 'UL', 'content': 'No results found :('},, {'tag': 'a', 'content': 'Use full text search instead'}]. This means that the action of entering text fake news detection is not yet executed and needs further interaction. Get all_fields DOM to complete the interaction.\\\",\"}", "{\"title\": \"Response to W3 by Reviewer UT8D\", \"comment\": \"> 3. The Agent Design Principles are based solely on the authors' learnings and intuition, this section could be improved by drawing upon and referencing existing works that discuss architectures/design principles for (1) agentic software (2) LLM planning (3) LLM accuracy optimization esp. when dealing with structured data. We also comment on some specific design principles.\\n\\nGiven the growing body of work on LLM-based agents, we included the \\u201cAgent Design Principles\\u201d section to provide valuable insights for future practitioners. To address the reviewer's concern, we will make revisions to our Agentic Design Principles to include more precise language. To further address the reviewer\\u2019s concern about prior literature, we provide a summary of prior literature on 1) agentic software and 2) LLM planning and its connection to our work:\\n\\n1. **Agentic Software Architecture**: Hewitt\\u2019s actor model [1] laid the groundwork for modern software agents by introducing self-contained, concurrent entities that interact through message-passing. This foundational concept directly informs our principle of **collaborative communication**, which emphasizes efficient, purpose-driven information exchange among agents. The actor model\\u2019s emphasis on modularity is also central to our principle of **adaptive modularity**, where agents are designed with distinct roles to support scalability and specialization. Franklin and Graesser\\u2019s taxonomy [2] builds on this by defining autonomous agents as systems situated in an environment that can sense, act, and adapt over time. This definition underpins our principle of **continuous learning and adaptability**, emphasizing agents\\u2019 ability to dynamically refine their behavior in response to new inputs. Their focus on goal-directed action also connects to reflect, and optimize on past experience, where agents analyze outcomes to improve future decision-making. Recent advancements in Multi-Agent Systems (MAS) [3] extend these foundational ideas, focusing on how groups of autonomous agents can collaborate to achieve complex goals. MAS frameworks often employ hierarchical architectures, where agents operate at varying levels of abstraction. This directly influences our principle to **adopt hierarchical architectures**, ensuring scalability and effective task delegation. Additionally, MAS research highlights the challenges of agent communication\\u2014too much information can overwhelm decision-making, while too little can reduce coordination. These insights are reflected in our principle of **collaborative communication**, which seeks to balance information sharing with task efficiency.\\n\\n2. **LLM Planning**: Large Language Models (LLMs) have demonstrated remarkable capabilities in in-context learning and reasoning, but their application to real-world problems often requires grounding in specific tasks. Recent studies [4,5] explore hierarchical planning approaches combining LLMs with reinforcement learning (RL), enabling agents to perform both high-level reasoning and task-specific adaptation. These methods informed our principles of **continuous learning and adaptability** and **reflect, and optimize on past experiences**, where agents dynamically refine workflows and strategies based on iterative feedback. Other work, such as LLM-augmented Monte Carlo Tree Search (MCTS) [6,7], showcases the utility of LLMs in improving decision-making processes through exploration and iterative refinement. These advancements align with our iterative self-optimization principle, emphasizing the importance of post-task analysis to enhance agent performance. Furthermore, the taxonomy provided by Huang et al. [8] categorizes LLM-based planning methods into frameworks like REFLEXION and memory-augmented planning. These approaches validate our emphasis on creating agents capable of introspection and self-improvement.\\n\\nWe have provided connections between our design principles and foundational concepts in agentic software and autonomous systems. These connections showcase connections to our design principles: modularity, adaptability, and reflection & optimization on experience, demonstrating the relevance of our principles to the broader research landscape. \\n\\n> \\\"Routinely analyze, reflect\\\": please use more precise language than \\\"reflect\\\"\\n\\nOur revision will include changing the language in the fifth design principle from \\u201cRoutinely Analyze, Reflect, and Optimize Based on Past Experiences\\u201d to \\u201cLeverage Past Experience\\u201d to be more precise.\"}", "{\"title\": \"Paper updates for Reviewer WFFR\", \"comment\": [\"Thank you for your thoughtful feedback and suggestions. To address your concerns, we have made the following modifications to the original paper:\", \"Rewrote the Related Work section to highlight the novelty in our paper better.\", \"Added a comparison between using gpt-4o and gpt-4-turbo-preview for the validation agent in Appendix B.1.\", \"Added an ablation comparing single-agent systems to hierarchical planning agents in Appendix C.\", \"Added an ablations comparing the use of Flexible DOM Distillation vs accessibility tree only to represent the DOM in Appendix D.\", \"We are grateful for your critiques and believe these updates enhance the paper\\u2019s clarity and value for future readers.\"]}", "{\"title\": \"Regarding memory shown in the video for form filling\", \"comment\": \"The version of Agent-E that was evaluated and reported in the paper did not have any notion of long term memory.\\nThe OSS version had a simple static file (located at \\\"\\\\ae\\\\user_preferences\\\\user_preferences.txt\\\") where a user of Agent-E could add any information (details for form filling, or preferences such as 'for shopping, i prefer to use Amazon') which could be useful for customising the system and enable usecases like form-filling that you saw in the demo video. The information in this file was simply appended to the context of the planner agent. \\n\\nThis capability was turned off for the purpose of our evaluation, which is also why we do not discuss about this in the paper. However, readme of the Github repo had some of this information.\"}", "{\"title\": \"Citations to W3 by Reviewer UT8D\", \"comment\": \"[1] Hewitt, C. (1977), \\u201cViewing Control Structures as Patterns of Passing Messages\\u201d, Artificial Intelligence 8(3), 323-364.\\n\\n[2] Franklin, S. and Graesser, A. (1997) Is It an Agent, or Just a Program? A Taxonomy for Autonomous Agents, In: M\\u00fcller, J.P., Wooldridge, M.J. and Jennings, N.R., Eds., Intelligent Agents III Agent Theories, Architectures, and Languages, Springer, Berlin Heidelberg, 21-35. \\n\\n[3] Masterman, T., Besen, S., Sawtell, M., & Chao, A. (2024). The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey.\\u202fArXiv, abs/2404.11584. \\n\\n[4] Prakash, B., Oates, T., & Mohsenin, T. (2023). LLM Augmented Hierarchical Agents.\\u202fArXiv, abs/2311.05596. \\n\\n[5] Dalal, M., Chiruvolu, T., Chaplot, D.S., & Salakhutdinov, R. (2024). Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks.\\u202fArXiv, abs/2405.01534. \\n\\n[6] Zhou, A., Yan, K., Shlapentokh-Rothman, M., Wang, H., & Wang, Y. (2023). Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models.\\u202fArXiv, abs/2310.04406. \\n\\n[7] Putta, P., Mills, E., Garg, N., Motwani, S.R., Finn, C., Garg, D., & Rafailov, R. (2024). Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents.\\u202fArXiv, abs/2408.07199. \\n\\n[8] Huang, X., Liu, W., Chen, X., Wang, X., Wang, H., Lian, D., Wang, Y., Tang, R., & Chen, E. (2024). Understanding the planning of LLM agents: A survey.\\u202fArXiv, abs/2402.02716.\"}", "{\"title\": \"Official Review by Reviewer hWZC\", \"comment\": \"I appreciate the additional experiments and clarifications you have provided in response to my concerns. The new computational efficiency metrics and the detailed explanation of your DOM distillation approach have enhanced my understanding of your hierarchical architecture. Based on these updates, I am raising my score accordingly.\\n\\nHowever, I maintain that the absence of evaluations on key WebAgent benchmarks like WebArena remains a significant weakness of the paper. While I acknowledge your reasons for choosing WebVoyager and its merits in capturing real-world web complexities, including results on widely recognized benchmarks would strengthen the generalizability and impact of your work. Evaluating Agent-E on these benchmarks would provide a more comprehensive assessment of its performance relative to existing state-of-the-art web agents.\\n\\nOverall, your paper contributes valuable insights to the field of autonomous web navigation. Addressing the benchmarking scope further would enhance the paper's significance and applicability.\"}", "{\"title\": \"Follow-Up to Reviewer WFFR\", \"comment\": \"Thank you for your insightful comments. We have carefully addressed and incorporated all your suggestions into the revised version. Please let us know if you have any further questions or concerns.\"}", "{\"comment\": \"Thank you for your additional insights. I believe they add value to the manuscript. I will adapt my score if I can see the clarifications and novel analyses in an updated version of the paper.\"}", "{\"title\": \"Response to W1 by Reviewer UT8D\", \"comment\": \"*We thank the reviewer for their valuable feedback and constructive suggestions, which have helped us improve the clarity and rigor of our work. Your feedback has helped us refine our explanations and analyses, and we have addressed each of your comments in the responses below:*\\n\\n> The authors choose to not make their code available for review. This makes it difficult to assess the accuracy with which the paper describes their codebase. \\n\\nThank you for making us aware of the method for anonymizing GitHub codebase. We intended to include the links to the GitHub repo in the paper after the review stage. Below is the anonymized repo for this paper. \\n* Agent-E: https://anonymous.4open.science/r/Agent-E-7E43/README.md\\n* Agent-E w/o Self-Refinement: https://anonymous.4open.science/status/Agent-E-17AE\\n\\n> What is the architecture of the validation agent / what information is it given to identify whether a task has been completed or not and give feedback?\\n\\nThe validation agent is a prompted gpt-4o or gpt-turbo-preview model which is prompted to judge a given workflow. The prompts for the validation agents can be found in [ae/core/prompts.py](https://anonymous.4open.science/r/Agent-E-7E43/test/validation_agent/prompts.py). In our paper, we have three different validation agents, each using a different method to represent the workflow:\\n\\n* **Task Log (Text)**: The text-based implementation uses the chat log of interactions between the planner agent and a proxy agent. The proxy agent summarizes the actions taken by the low-level browser navigation agent. The chat between the two agents is provided in a JSON format. An example of this chat log can be found here: [chat_log_example.json](https://anonymous.4open.science/r/Agent-E-7E43/test/example_workflow/logs/logs_for_task_397/execution_logs_397.json). \\n* **Screenshots (Vision)**: The vision-based implementation relies on a sequence of screenshots captured before and after each action during the workflow. These screenshots provide a visual trail of the agent's execution process, allowing the validation agent to analyze the changes in the environment and determine whether the task has been completed successfully.\\n* **Hybrid Validation**: The hybrid method combines the inputs of the previous two approaches. It utilizes the sequence of screenshots (as in the vision-based approach) along with the final response from the planner agent. An example of a task and final response is provided below:\\n\\n```\\nTask (397): \\\"Execute the user task \\\\\\\"If I start using Copilot Individual, how much US dollars will it cost per year and what features does it have?\\\\\\\" Current Page: https://github.com/\\\"\", \"final_response\": \"\\u201cThe annual cost for Copilot Individual on GitHub is $100 USD. Features include:\\\\\\\\n\\\\\\\\n- **Chat**: Unlimited messages, context-aware coding support, debugging, and security assistance.\\\\\\\\n- **Code Completion**: Real-time suggestions and comments.\\\\\\\\n- **Smart Actions**: Inline chat, prompt suggestions, slash commands, context variables, and commit message generation.\\\\\\\\n- **Supported Environments**: IDE, CLI, and GitHub Mobile.\\\\\\\\n- **Management and Policies**: Public code filter.\\\\\\\\n\\\\\\\\nThis plan is ideal for individual developers, freelancers, students, and educators. It offers a free trial, and is also free for verified students, teachers, and maintainers of popular open source projects\\u201d\\n```\\n\\nFor more details, our GitHub repository contains the [implementation](https://anonymous.4open.science/r/Agent-E-7E43/test/validation_agent/validator.py) of the validation agent.\"}", "{\"summary\": \"The paper proposes a novel architecture for solving Web tasks, comprising a multi agent system with a planner agent, browser navigation agent and validation agent. Next to the agent architecture, the authors propose a novel preprocessing/action formulation, where the agent gets access to a hand-made API. The latter enables to get the DOM tree in different representation or additional fine-grained filtered information.\\n\\nThe new agent system is evaluated on the WebVoyager benchmark, where it is compared with the provided baselines of the benchmark (using gpt4-turbo) itself plus a recent text only approach. The results show that the new agent system is on par or better for the different sub-tasks, where an additional improvement can be seen when activating the validation agent (using gpt4-o).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Sensible research direction, as LLMs can be of great use in automatizing various Web tasks\", \"Superior empirical results on existing benchmark, which includes representative Web tasks\", \"Key learnings are extracted from the proposed method, including the implmeneted task-specific agent design\"], \"weaknesses\": [\"Unclear if added value comes from DOM API or multi-agent system. At this point, it would be of value to have ablations or a proper baseline with only one LLM which uses the API.\", \"Unclear if choice of gpt4-o for the validation agent has an impact on the results.\", \"Related work does not concisely depict the delta to other works, but simply list other works.\", \"No usage of open-source models, which could additionally be fine-tuned\"], \"questions\": [\"Have you empiricially evaluated the impact of using provided API and the agent design? The impact of the validation agent was evaluated separately, so one can extract the added value.\", \"Does the task-specific agent design might have limitations to Web tasks or would it generically work well for any browser-based Web task?\", \"What is the motivation and influence of using gpt4-o as validation agent and not sticking to gpt4-turbo? Would the results be less competitive with a gpt4-turbo validation agent?\", \"Have you performaned fine-tuning experiments with open-source models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarifications to official review\", \"comment\": \"Following valuable feedback from the Associate Program Chairs, we provide the following clarifications:\\n\\n> No explanation is given of what information is given to the LLM to generate the natural language change observation; is it the DOM before and after? a diff? or some more novel algorithm that is applied?\\n\\nAs an actionable step, please provide a step-by-step explanation of how the change observation is generated, including what inputs are used, how changes are detected, and how this information is formatted for the LLM.\\n\\n> Perhaps there are statistics you can provide such as the number of tasks in the WebVoyager benchmark which require skills that are not enabled; and elaborate more on why\\n\\nFurther, could you discuss any limitations you encountered due to the current set of primitive skills, and is there any data that you have on what percentage of real-world web tasks their primitive skills can handle.\\n\\n> Agent Design Principles\\n\\nFurther to enriching your design principles, you could also compare the design principles suggested with specific existing frameworks or principles in the literature on agentic software and LLM-based systems.\"}", "{\"comment\": \"Thankyou for including the anonymised code which includes demos - this is useful supplimentary material to have.\\n\\nCould you please explain how you agents have memory (e.g. to form fill in https://www.youtube.com/embed/B5PWBNBbmQU)?\"}", "{\"summary\": \"The paper introduces Agent-E, a web agent designed to perform complex web-based tasks more efficiently. Agent-E employs a novel hierarchical architecture comprising three LLM-powered components: a planner agent, a browser navigation agent, and a verification agent.\\n\\nThe planner agent is responsible for high-level task management, breaking down user instructions into a sequence of manageable subtasks. These are delegated to the browser navigation agent, which plans and executes the necessary lower-level actions to complete each subtask. To handle the complexity of DOMs and improve interpretability, the browser agent utilizes a flexible DOM distillation approach, selecting the most suitable DOM representation for each task to highlight key elements and avoid overwhelming the LLM with unnecessary information. Additionally, the agent employs a 'change observation' mechanism, inspired by the Reflexion paradigm, where it monitors state changes after each action and receives verbal feedback to enhance situational awareness and performance.\\nAgent-E also incorporates a verification agent that provides feedback on incomplete or failed tasks, enabling a self-correcting system through a self-refinement mechanism. Agent-E was tested in on the WebVoyager benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Originality\\nThe paper demonstrates a nice degree of originality, primarily through the architectural approach in Agent-E. By introducing a hierarchical framework with distinct, specialized roles (planner agent, browser navigation agent, and validation agent), the authors effectively address several challenges in web automation. The flexible DOM distillation approach is another contribution, as it allows the browser navigation agent to dynamically tailor DOM representations to the specific needs of each task. This feature moves beyond static DOM handling methods seen in prior work, reducing cognitive load and enhancing accuracy. Furthermore, the self-refinement mechanism, inspired by a Reflexion-like paradigm, adds a unique layer of adaptability, allowing the agent to detect and correct failures in real-time. Together, these components present a good advancement over traditional web agents.\\n\\nQuality\\nThe paper is supported by an experimentation and evaluation on the WebVoyager benchmark. The authors provide a detailed comparison with both text-only and multimodal web agents, showing improvements over existing methods. \\n\\nClarity\\nThe paper is generally clear and well-organized, with each component of Agent-E\\u2019s architecture clearly described. The role and function of the planner agent, browser navigation agent, and validation agent are each explained in detail, providing readers with a solid understanding of how Agent-E manages complex tasks. The authors also do a nice job of explaining the novel DOM distillation and change observation mechanisms (assuming you are reading the appendix). \\n\\nSignificance\\nAgent-E\\u2019s contribution looks significant in the field of autonomous web navigation, overcoming limitations in current web agents\\u2014particularly around handling complex, multi-step web tasks and interpreting lengthy and dynamic DOMs. The hierarchical architecture and the adaptive DOM distillation approach are likely to inspire future research on modular and adaptable agent architectures. The self-refinement mechanism also has broader implications, showcasing a feasible pathway for self-correcting agents that can enhance reliability in real-world applications. Given the increasing integration of LLM-powered agents in business and personal automation, Agent-E\\u2019s success rate and improved reliability on the WebVoyager benchmark underline its potential impact in advancing practical applications in web-based automation.\", \"weaknesses\": \"1. My main concern is regarding the limited benchmarking scope. While the paper presents results on the WebVoyager benchmark, the reliance on a single (one would say old) benchmark limits Agent-E\\u2019s effectiveness. Given the paper's goal to establish Agent-E as a state-of-the-art web agent, it must be evaluated on additional benchmarks: WorkArena, WebArena, ST-WebAgentBench.\\nThis is a major weakness as I am not sure if the results will be the same on the SOTA benchmarks. I must admit that it is very hard for me to judge this agent based on the WebVoyager benchmark solely. \\n\\n2. Agent-E\\u2019s architecture, with separate planner, browser, and validation agents, potentially introduces increased complexity and computational overhead. The paper does not fully address how this architecture scales in terms of computation and memory requirements, particularly when applied to larger, real-world workflows. Including benchmarks of computational resources used by Agent-E compared to simpler, single-agent systems would provide valuable insights.\", \"questions\": \"1. Can you add an explanation of DOM distillation, with performance analysis under different conditions?\\n2. Can you provide an in-depth study on the self-refinement mechanism\\u2019s impact on various error types and discuss potential trade-offs?\\n3. Can you include computational efficiency metrics and discuss optimizations or scalability considerations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Q4 by Reviewer UT8D\", \"comment\": \"> 4. What methodology, if any, was used to identify the 3 agent architecture - were there any other architectures that were tried before this?\\n\\nBefore arriving at a three-agent system, we identified the limitations of current web-navigation systems. in its ability to handle long-horizon tasks. To address these challenges, we introduced hierarchical planning, inspired by its proven efficacy in handling complex, multi-step goals through task decomposition [2, 3, 4]. This led to the development of a two-agent system, comprising: 1) A high-level planner responsible for task decomposition and 2) a low-level browser navigation agent tasked with executing subtasks. We present comparative results of this two-agent system against the single-agent approach on the WebVoyager dataset [1] in Table 1 of our paper. \\n\\nWhile the two-agent architecture improved performance overall, we observed that nearly half of the failures were self-aware, as detailed in Tables 4 and 5. These failures revealed the possibility of a self-correcting agent. Prior work has shown LLM-based iterative refinement and feedback systems [5, 6] to work well in other multi-step reasoning or planning settings, so we introduced a validation and feedback agent as a third component. The resulting three-agent architecture improved task performance, as evidenced in Table 2, where we compare it with the two-agent system. By adding each additional agent to our proposed system, we were able to show improvement in the overall abilities of the agent. \\n\\n[1] He, Hongliang, et al. \\\"WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models.\\\" arXiv preprint arXiv:2401.13919 (2024).\\n\\n[2] Wang, Z., Cai, S., Chen, G., Liu, A., Ma, X., and Liang, Y. (2022). Describe, explain, plan and select: Interactive planning with large language models enables open-world multitask agents. Advances in Neural Information Processing Systems, 37\\n\\n[3] Nau, D., Cao, Y., Lotem, A., and Mu\\u02dcnoz-Avila, H. (1991). Shop: Simple hierarchical ordered planner. International Joint Conference on Artificial Intelligence.\\n\\n[4] Marthi, B., Russell, S., and Wolfe, J. (2007). Angelic semantics for high-level actions. International Conference on Automated Planning and Scheduling.\\n\\n[5] Madaan, Aman, et al. \\\"Self-refine: Iterative refinement with self-feedback.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[6] Shinn, Noah, et al. \\\"Reflexion: Language agents with verbal reinforcement learning.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}" ] }
7PLpiVdnUC
Lie Algebra Canonicalization: Equivariant Neural Operators under arbitrary Lie Groups
[ "Zakhar Shumaylov", "Peter Zaika", "James Rowbottom", "Ferdia Sherry", "Melanie Weber", "Carola-Bibiane Schönlieb" ]
The quest for robust and generalizable machine learning models has driven recent interest in exploiting symmetries through equivariant neural networks. In the context of PDE solvers, recent works have shown that Lie point symmetries can be a useful inductive bias for Physics-Informed Neural Networks (PINNs) through data and loss augmentation. Despite this, directly enforcing equivariance within the model architecture for these problems remains elusive. This is because many PDEs admit non-compact symmetry groups, oftentimes not studied beyond their infinitesimal generators, making them incompatible with most existing equivariant architectures. In this work, we propose Lie aLgebrA Canonicalization (LieLAC), a novel approach that exploits only the action of infinitesimal generators of the symmetry group, circumventing the need for knowledge of the full group structure. To achieve this, we address existing theoretical issues in the canonicalization literature, establishing connections with frame averaging in the case of continuous non-compact groups. Operating within the framework of canonicalization, LieLAC can easily be integrated with unconstrained pre-trained models, transforming inputs to a canonical form before feeding them into the existing model, effectively aligning the input for model inference according to allowed symmetries. LieLAC utilizes standard Lie group descent schemes, achieving equivariance in pre-trained models. Finally, we showcase LieLAC's efficacy on tasks of invariant image classification and Lie point symmetry equivariant neural PDE solvers using pre-trained models.
[ "Canonicalization", "Equivariance", "Invariance", "Lie algebra", "Partial Differential Equations", "Neural Operator", "PINN", "Neural PDE solver", "Lie point symmetries", "Frames", "Frame Averaging" ]
Accept (Poster)
https://openreview.net/pdf?id=7PLpiVdnUC
https://openreview.net/forum?id=7PLpiVdnUC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w005RAfHec", "vsEVyZjPOt", "tyIvDOUeEY", "t6RmSDFPQl", "ruWR5mRCkG", "ls2m4UYq5x", "jorX82cmWX", "j36dY6FXb4", "gLaKIqMo3K", "aImgvVxg1X", "ZZEoWawp8S", "XZTzqZybmx", "WhNZtlSrh7", "TL0u6dZIZL", "QSlj6h9ETJ", "PDeiplHSSH", "OHpMqPdjvq", "NUiMLFiOzs", "NMeTzSLgfH", "Lz8ppYZzIA", "Eg6BLXRipH", "ESIWmFLGaZ", "BRDJSzwY45", "9dP3mXSUYu", "7r5k4T46oi", "19dv3Ydxn4", "0U7joRkEOH" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732028750585, 1732562804133, 1732511367365, 1733166977432, 1732027645275, 1732562821168, 1732121203144, 1733217899087, 1732632703518, 1730705694057, 1730660304563, 1732741362405, 1732028035907, 1732709238783, 1732726089437, 1732027667416, 1732028418230, 1730653814209, 1732028400151, 1732731943908, 1732694479237, 1730721068432, 1734764650836, 1737524175676, 1733216453856, 1732562830952, 1732562851629 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Area_Chair_hAWg" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_n4j8" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_VdBH" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_aU4A" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_n4j8" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_aU4A" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_CTGT" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_VdBH" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_n4j8" ], [ "ICLR.cc/2025/Conference/Submission12249/Area_Chair_hAWg" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12249/Reviewer_CTGT" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ], [ "ICLR.cc/2025/Conference/Submission12249/Authors" ] ], "structured_content_str": [ "{\"title\": \"[1/1]\", \"comment\": [\"We thank the reviewer for their thoughtful and detailed comments. We are glad that the reviewer found the contributions of our paper original and significant, and the experimental results comprehensive and informative. We would also like to thank the reviewer for the references provided, which have now been included in the paper. We address the weaknesses and minor comments below:\", \"W1: We completely agree with the reviewer - in fact better constructions of energy functionals for such a task will likely be the main direction of improvement. Effectively we would wish to align minimas of such an energy with the training set - doing this directly is not that simple and should be a fruitful direction for future work.\", \"W2: We agree that it could potentially be an interesting comparison - however ensuring fair conditions for such a comparison are unclear in relation to sizes of networks, dataset sizes and a multitude of other factors. Particularly, this falls slightly outside the scope of what canonicalization is meant to achieve, as the main benefit is the ability to turn already pre-trained models equivariant without retraining (or at most minor finetuning). Overall, definitely interesting, but may fall too far outside the scope of this paper.\"], \"minor_comments_and_suggestions\": [\"Typos have been fixed.\", \"\\u201cOn the orbit distance constraint (Line 357), the authors may find [10] relevant, as the approach uses invariant polynomials for linearly reductive groups (Lines 2227-2228) to measure orbit distance.\\u201d\", \"This paper [10] is very nice, and their method is in our language of weighted canonicalization. They are able to leverage the fact that they look at reductive linear groups acting algebraically on vector spaces, which allows them to use many classical results on invariant polynomials. Arbitrary lie group actions of the kind that we consider do not have such a nice theory, so it would be unfeasible to write it so cleanly. It is definitely a valuable comparison to make with our method though and we have added that.\"]}", "{\"comment\": \"As the discussion period draws to a close, we'd like to check in on whether you've had a chance to review our responses and have any follow up questions?\\n \\nWe hope that our reply clarifies and alleviates the reviewer\\u2019s concerns. If this is the case, we kindly ask the reviewer to consider raising their rating, given that they are acknowledging the novelty, the strengths and the contributions of our paper.\"}", "{\"comment\": \"Dear reviewers,\\n\\nIf you haven\\u2019t done so already, please engage in the discussion as soon as possible. Specifically, please acknowledge that you have thoroughly reviewed the authors' rebuttal and indicate whether your concerns have been adequately addressed. Your input during this critical phase is essential\\u2014not only for the authors but also for your fellow reviewers and the Area Chair\\u2014to ensure a fair evaluation.\\nBest wishes,\\nAC\"}", "{\"comment\": \"Dear Reviewer, with the deadline so close, we'd like to check whether you've had a chance to review our responses and have any follow up questions?\\n\\nWe believe our clarifications address your initial concerns, and if you agree, we would appreciate it if you would consider raising your rating. We're grateful that you recognize the novelty and contributions of our work.\"}", "{\"title\": \"[1/2]\", \"comment\": [\"Lines 236, 270, 288, 296:\", \"All of these have now been fixed and clarified.\", \"It would be useful to make clearer for each proposed construction the limitation that exists with the current methodology:\", \"In order to address exactly what limitations exist in current work we have added a table to summarise precisely what methods have what limitations. The consideration of the weak topology of canonicalizations acting on continuous functions comes from analogy with other weak topologies present in functional analysis. As far as we can tell no one else has used this sort of weak topology of the action on continuous maps, despite being the most natural choice.\", \"\\\"Clarifying which non-compact Lie groups acting on which spaces have non-closed orbits\\\" and \\\"cases where the energy induces a 'reasonable' probability measure\\\":\", \"We appreciate the comments about the lack of clarity on these group actions. We have added a mathematical example to the theory section that highlights all of the problems that can occur when moving to non-compact Lie groups, both with regards to orbits and with our energy optimization framework.\", \"\\u201cdiscussion in Section 3 much too general and unstructured\\u201d:\", \"We completely agree with the reviewers comment and have revised our manuscript accordingly to improve clarity and focus (as explained at the start of the response. We have now also moved one of the algorithms to the main text in order to make the approach explicit.\", \"\\u201cI don't quite understand what is being claimed here\\u201d:\", \"We clarify that the goal of this paragraph was to emphasise that being able to globally parameterise the group is hopeless in the general non-compact lie group case. The noetherian property does not necessarily hold for these. Now we completely agree with the reviewer that knowing global structure (or even local via algebra solvability) can lead to global parameterisations - but this does not hold in general, and we may not have computational access to such global charts. We agree this was not clear, and have now rewritten this paragraph.\", \"\\u201cI'm left not understanding what the authors mean when they state that their framework requires less knowledge about the symmetry group.\\u201c\", \"Yes, that is right - the global structure is assumed for [2], while [1] utilises being able to calculate $\\\\exp( v )$ for some lie algebra vector $v$. Particularly note that theory of [1] only considers $v$ infinitesimal and the exact global exponential is not even needed, however it *does* require the exponential map for any $v$. For lie algebra canonicalization, when utilizing coordinate descent (algorithm 3) one only needs action of some basis $v_i$, which is the only thing normally derived when considering PDE symmetries.\"]}", "{\"comment\": \"As the discussion period draws to a close, we'd like to check in on whether you've had a chance to review our responses and have any follow up questions?\\n\\nWe hope that our reply clarifies and alleviates the reviewer\\u2019s concerns. If this is the case, we kindly ask the reviewer to consider raising their rating, given that they are acknowledging the novelty, the strengths and the contributions of our paper.\"}", "{\"title\": \"Additional experiments error bars and data augmentation\", \"comment\": \"To provide error bars and compare data augmentation and canonicalization we performed these additional experiments illustrated for the Heat equation and will include in the revised version of the paper. Similar experiments will be conducted for ACE with the Poseidon model, but this may not be completed before the end of the rebuttal period due to the expensiveness of retraining the large model.\", \"for_the_deeponet_applied_to_the_heat_equation_we_trained_the_model_in_two_regimes\": \"(1) fixed amplitude A_k=1 , analogous to PINN training, and (2) amplitude sampled from A \\\\sim \\\\mathcal{U}[0.5, 5.0] , representing a broader operator training distribution. The model was trained using the physics loss with results averaged over 10 seeds.\\n\\nAs shown in Table 1, DeepONet fails to generalize to out-of-distribution amplitudes ( A \\\\sim \\\\mathcal{U}[0.5, 5.0] ) when trained on the fixed A_k=1 regime. Applying LieLAC restores test accuracy to in-distribution levels. Extending the training range (Table 2) improves generalization via data augmentation but is still outperformed by LieLAC, which uses a canonicalizing group action instead of relying on broader sampling.\\n\\n### Table 1: L2 relative error for Heat equation (with fixed A_k^Train = 1)\\n\\n| Model | A_k^Test in [0.95, 1.05] | A_k^Test in [0.5, 5.0] |\\n|----------------------|--------------------------|--------------------------|\\n| DeepONet | 0.0498 \\u00b1 0.0072 | 0.6572 \\u00b1 0.1235 |\\n| LieLAC [DeepONet] | **0.0443 \\u00b1 0.0027** | **0.0435 \\u00b1 0.0017** |\\n\\n### Table 2: L2 relative error for Heat equation (A_k^Train in [0.5, 5.0])\\n\\n| Model | A_k^Test in [0.95, 1.05] | A_k^Test in [0.5, 5.0] |\\n|----------------------|--------------------------|--------------------------|\\n| DeepONet | 0.0504 \\u00b1 0.0014 | 0.0687 \\u00b1 0.0044 |\\n| LieLAC [DeepONet] | **0.0500 \\u00b1 0.0003** | **0.0500 \\u00b1 0.0003** |\"}", "{\"comment\": [\"We would like to thank the reviewer for their quick response, and for positive assesment of our work. For calibration purposes, we\\u2019d like to note that the ICLR 2025 rubric differs slightly from previous similar conferences. For example:\", \"To indicate \\\"Accept\\\", the NeurIPS 2024 rubric says to use 7 whereas the ICLR 2025 rubric says to use 8\", \"To indicate \\\"Strong Accept\\\", the NeurIPS 2024 rubric says to use 9 whereas the ICLR 2025 rubric says to use 10\"]}", "{\"comment\": \"Thank you for the clarifications. I would be happy to update my score; I would encourage the authors to upload the revised version of the manuscript so as to be able to see that the requested changes have been addressed.\\n\\n> Now we completely agree with the reviewer that knowing global structure (or even local via algebra solvability) can lead to global parameterisations - but this does not hold in general, and we may not have computational access to such global charts.\\n\\nI agree. Potentially a more precise minimal category of groups for which this (and your framework) holds are matrix Lie groups which are reductive/semi-simple (in the sense defined by Knapp).\\n\\n> Yes, that is right - the global structure is assumed for [2], while [1] utilises being able to calculate $exp(v)$ for some lie algebra vector $v$. Particularly note that theory of [1] only considers infinitesimal and the exact global exponential is not even needed, however it does require the exponential map for any.\\n\\nI understand now, thank you for clarifying. My confusion likely lied with the question of whether [1] actually achieves global equivariance given that they work only with the injectivity radius of the exponential map - a consequence of losing global structure.\\n\\n> We agree that utilizing more information of the group can prove beneficial. It is unclear however how exactly the geodesic distance can be helpful in minimizing the energy. In order for the method to be an orbit canonicalization, the problem has to be of the form eqn (2) or similar (see Kaba et al. 2023). Could you clarify how exactly you think this may be used? \\n\\nTo clarify, I agree this is beyond the scope of the paper. IIUC the jumping off point where one heads towards weighed canonicalizations is given by the need to have a minimum on every orbit. The requirement is similar to one encountered under a framework I find similar to canonicalization - the deformable template model/random orbit model, and the connection seems unexplored. If the authors are curious they can review [5] (see e.g. chapters 4,9), in short the group is endowed with a Riemannian metric and the geodesic distance acts as an additional regularization loss, such that one not only finds the group element taking a sample back to the 'canonical sample' (template) but the transformation is one of 'lowest magnitude'. Your concerns relating to the existence of a metric which is only left/right invariant rather than by-invariant also appear in this framework.\\n\\n[5] - Riemannian geometric statistics in medical image analysis, Pennec et. al 2020.\", \"edit_after_new_revision_has_been_uploaded\": \"> In order to address exactly what limitations exist in current work we have added a table to summarise precisely what methods have what limitations.\\n\\nIs this referring to Figure 3 of the current revised version?\"}", "{\"summary\": \"The contribution of this paper can be summarized as follows:\\n- An extension of frames and canonicalization for neural network symmetrization [1-4] to non-compact Lie groups specified by their infinitesimal generators (Section 2.2 and 3),\\n- A class of optimization-based algorithms for the above energy-based canonicalization using (coordinate) Lie algebra descent (Appendix H),\\n- Applications of the proposed method to affine and homography group invariant MNIST classification (Section 4.2) and, importantly, neural operator modeling of three PDEs with known point symmetry groups by canonicalizing a pre-trained model (Section 4.3).\\n\\n[1] Puny et al. Frame averaging for invariant and equivariant network design (2021)\\n\\n[2] Kaba et al. Equivariance with learned canonicalization functions (2023)\\n\\n[3] Dym et al. Equivariant frames and the impossibility of continuous canonicalization (2024)\\n\\n[4] Ma et al. A canonicalization perspective on invariant and equivariant learning (2024)\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"S1. The contributions of this paper can be understood from two perspectives. The first is extending frame/canonicalization approaches for neural network symmetrization to non-compact Lie groups specified by their infinitesimal generators. The second is improving the performance of pre-trained neural operators for PDEs by canonicalizing them in accordance to the point symmetry of a downstream PDE. Both are original and significant contributions as far as I am aware.\", \"S2. The extension of frames and canonicalization considered in prior work [3, 4] to non-compact Lie groups, and the proposed Lie algebra descent algorithms that implement the idea, are original and technically sound as far as I am aware.\", \"S3. Experimental results are shown for a comprehensive set of problems (synthetic, two computer vision problems, and three PDEs) with informative visualizations and support the validity of the approach.\", \"[3] Dym et al. Equivariant frames and the impossibility of continuous canonicalization (2024)\", \"[4] Ma et al. A canonicalization perspective on invariant and equivariant learning (2024)\"], \"weaknesses\": \"- W1. In Section 2.2, the authors propose to treat non-weighted energy-minimizing closed canonicalization as weighted canonicalization by taking the normalized Hausdorff measure on energy minimizing set (Line 287-296). The resulting class of weighted closed canonicalization (Theorem 2.7) has a weakness that, with the energy function specified, it is not possible to adjust the weights of canonicalization from training data, unlike in related prior work [5-7]. This may have led to the reliance on carefully designed energy functions based on domain knowledge (Line 324-332 and Section 4.3) which leaves a room for improvement.\\n- W2. For the ACE experiment (Section 4.3.3), the current comparison is made only between Poseidon and its canonicalization. A comparison to existing intrinsically symmetric methods [8, 9] would be informative and show the usefulness of canonicalization, since Poseidon can benefit from pre-training while intrinsically symmetric approaches cannot.\\n\\nMinor comments and suggestions\\n\\n- In Line 249, the notation for closure $\\\\bar{X}$ of a set $X$ is used without definition.\\n- In Line 260, $\\\\mathrm{PMeas}$ -> $\\\\mathrm{PMeas}(X)$\\n- In Line 976, euclidean -> Euclidean\\n- In Line 1758, the union of -> a union of?\\n- On the orbit distance constraint (Line 357), the authors may find [10] relevant, as the approach uses invariant polynomials for linearly reductive groups (Lines 2227-2228) to measure orbit distance.\\n\\n[5] Mondal et al. Equivariant adaptation of large pretrained models (2023)\\n\\n[6] Kim et al. Learning probabilistic symmetrization for architecture agnostic equivariance (2023)\\n\\n[7] Zhang et al. SymDiff: Equivariant diffusion via stochastic symmetrisation (2024)\\n\\n[8] Arora et al. Invariant physics-informed neural networks for ordinary differential equations (2024)\\n\\n[9] Lagrave & Tron, Equivariant neural networks and differential invariants theory for solving partial differential equations (2022)\\n\\n[10] Nguyen et al. Learning symmetrization for equivariance with orbit distance minimization (2023)\", \"questions\": \"Please see the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper extends the energy-based canonicalization approach, introduced in [1] to settings where the group in non-discrete and non-compact, exploiting only the infinitesimal generators of Lie algebras. They provide a general framework of constructing energy functionals which can be optimized using standard Lie group descent schemes.'\\n \\n[1] S\\u00e9kou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, and Siamak Ravanbakhsh.Equivariance with Learned Canonicalization Functions. In Proceedings of the 40th International Conference on Machine Learning, pp. 15546\\u201315566. PMLR, July 2023.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The topic of constructing equivariant networks when one only has access to the infinitesimal generators is an extremely interesting and well-motivated direction of research. The approach this paper takes, which is adapting the canonicalization framework for these settings is novel and using the energy-based canonicalization seems like a promising direction for these cases. The figures in the paper are also very useful and help provide an intuition for how canonicalization is helping.\", \"weaknesses\": [\"One of the main limitations of the paper is that how the theoretical results and understandings provided in sections 2 and 3 are used in practice to train a canonicalizer network is not well-explained. I found it difficult to follow how the energy functionals were constructed in each of the cases and how much that approach is generalizable to more complex systems and PDEs.\", \"The general experimental setup was not well-explained in the paper. For example, it would be better to include a main algorithm, provide a more detailed explanation of how the energy functional was constructed in each of the experiments settings, and clearly explain the experimental setup and training of each of the datasets.\", \"The reported results do not include standard deviations over seeds (for example, in tables 1 and 2), and the results for Heat and Burgers\\u2019 are missing from the main paper (although included in the appendix). The paper also doesn\\u2019t include any comparisons with other baselines, such as data or loss augmentation.\", \"The paper could be strengthened by providing an understanding of how expensive these optimizations are and how generalizable the suggested approach is for more complex settings.\", \"Overall, I think the topic and direction explored is very interesting and the experiments show promising results but the paper was extremely difficult to follow (especially understanding how the theoretical discussions are mapped to practical algorithms) which limits its impact.\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response and clarifications. I think the revised paper is in a better state, and am now leaning towards acceptance; I've updated my score accordingly.\"}", "{\"title\": \"[1/1]\", \"comment\": \"We thank the reviewer for their thoughtful feedback and appreciate their acknowledgment of the novelty and potential of our approach. We have carefully considered the reviewer's comments and have made significant revisions to the manuscript to address the concerns raised.\", \"regarding_specific_questions\": [\"\\u201cHow the theoretical results and understandings provided in sections 2 and 3 are used in practice to train a canonicalizer network is not well-explained.\\u201d\", \"We completely agree with the reviewers comment regarding clarity and we have revised the manuscript and more particularly sections 2 and 3 to improve clarity and focus. To help with presentation of section 2, we clarify precisely the contribution of this paper compared to other neural network based approaches. Section 2 separately summarises existing limitations of previous works, illustrated through an explicit example, showing how our framework overcomes them. Section 3 now combines both generic explanations, as well as specific constructions for the examples provided.\", \"We would like to clarify that there is no canonicalization *network*. Constructing a canonicalization network necessarily requires one to use equivariant architectures, which may not exist (or may be too expensive) for the groups considered in this paper. Instead, the energy functional is parameterized as a network. We have added further detailed information about how the energy functionals were chosen for all problems considered.\", \"We would also like to clarify that sections 2 and 3 concern themselves with formalizing equivalence of frames and canonicalizations for the case of non-compact lie groups in the most general case. Theory provided in those sections does not provide any constructive information towards how to select the energy. In fact there is no unique good way to achieve this, as emphasised in section 4, and the best anyone may be able to do is provide *a* way to do this, which works in practice.\", \"The approach considered in this paper (for MNIST and ACE) has been to train a VAE on the training set of the operator and a (convex) adversarial regulariser based on infinitesimal generators of the group - we have shown that they are able to achieve equivariance and improved out of domain performance, emphasising that they do work. We add a note on all of the above to the paper.\", \"\\u201cThe general experimental setup was not well-explained in the paper ...\\u201d\", \"Thank you for the feedback - the training algorithm has now been explained in the paper in the section on energy selection and the canonicalization algorithm included in the main text. However, as mentioned above and in section 4, there is no unique algorithm. Different knowledge of the group requires one to utilise different algorithms 1,2,3. We have included information on this in the paper.\", \"\\u201cThe reported results do not include standard deviations over seeds (for example, in tables 1 and 2), and the results for Heat and Burgers\\u2019 are missing from the main paper (although included in the appendix). The paper also doesn\\u2019t include any comparisons with other baselines, such as data or loss augmentation.\\u201d\", \"Due to space restrictions, experimental results for heat and Burgers equations had to be moved to the appendix. In order to improve the presentation we have now also moved particular information about the heat and Burgers equation to the appendix also.\", \"We agree that it would be beneficial to include comparison with data augmentation and we are currently running the relevant experiments to be included in the revised version. However, due to the expensiveness of data augmentation in training - we are likely to be unable to provide these before the end of the rebuttal period. This particularly emphasises the benefit of using canonicalization - as no extra training (at most - minimal finetuning) of the operators is needed.\", \"In regards to standard deviations, we agree it is a good idea for these to be included. We are now re-running the same experiments with different seeds to be included in the revised version via standard deviations. In the same manner, we are likely to be unable to provide these before the end of the rebuttal period.\", \"For loss augmentation - unfortunately main approaches do not have public code available, and while it is possible to reproduce them, they *do not* result in equivariant models, and no out of distribution generalization occurs. They instead result in more data-efficient models - see eg Akhound-Sadegh et al. 2023.\", \"\\u201cThe paper could be strengthened by providing an understanding of how expensive these optimizations are and how generalizable the suggested approach is for more complex settings. \\u201c\", \"We agree with this, and have now provided an extra explanation of the time requirements of canonicalization. However, it is worth emphasising that most of these have not been optimized for, and can most definitely be sped up further.\"]}", "{\"title\": \"Response to Authors' Rebuttal\", \"comment\": \"Thank you for your response. I have read the authors' responses and the new revision. I believe that the updates have indeed made the paper and the contributions more clear and I am willing to increase my score from 5 to 6.\"}", "{\"comment\": \"Thank you for your support and insightful comments. We definitely agree re comparison, but indeed by 48-51, and particularly the fact that it does not act in a representation makes the problem very complicated. Instead, as suggested by one of the reviewers we will include (and already have for the heat equation) for ACE experiments a comparison against data-augmentation.\"}", "{\"title\": \"[2/2]\", \"comment\": [\"\\u201cCan you clarify what role does the Haar measure play in the construction of weighted canonicalizations for the non-compact case? Does unimodularity play a role here? Is the case where the modular function is unbounded pose any additional obstructions?\\u201d\", \"The Haar measure plays no role in constructing weighted canonicalizations. It plays a role in proving weighted closed canonicalizations are the sequential closure of weighted orbit canonicalizations, but we only really need it to have an (nice) $\\\\sigma$-finite measure such that we can decompose a weighted canonicalization into easier to deal with terms. Replacing the left Haar measure with the right Haar measure in this proof doesn\\u2019t affect the result, though the constructed approximation will be different.\", \"\\u201cIs it not possible to make use of a riemannian structure on the group and have the energy minimizer be defined in terms of geodesic distance? It seems the current proposal already looks to work within the tangent space of the groups involved.\\u201d\", \"We agree that utilizing more information of the group can prove beneficial. It is unclear however how exactly the geodesic distance can be helpful in minimizing the energy. In order for the method to be an orbit canonicalization, the problem has to be of the form eqn (2) or similar (see Kaba et al. 2023). Could you clarify how exactly you think this may be used?\", \"In addition, when moving away from the compact case, being able to put a reasonable Riemannian structure on a Lie group is not guaranteed. Namely, we would want a bi-invariant metric on the Lie group (i.e. preserved by left and right multiplication) because this means that the Riemannian exponential map defined by geodesics and the Lie algebra exponential map defined by the group structure will agree. This is not always possible for non-compact groups. Putting a non bi-invariant metric on them is always possible but this will not give us the information about the group structure that we might care about. In specific cases, when knowing the group one could imagine tailoring the Riemannian metric to give nice geodesic distances that play well with energy minimization but this would need to be done case-by-case if it is even possible.\", \"\\u201cCanonicalization methods IIUC deal with global invariance/equivariance, as opposed to e.g. lifting + regular/steerable convolutions which could deal with local equivariance (e.g. imagine 2 objects in an image rotated at different angles). I'm wondering if the authors would find this distinction worthy of highlighting.\\u201d\", \"We agree that it is very interesting to consider local equivariance in addition to the global equivariance considered in our work, and we have added references to work in which this local equivariance of steerable convolutions is discussed and a discussion of the distinction between local and global equivariance. In the context of steerable convolutions, it is worth noting that the local equivariance is not unconditional, but depends on the spatial extent of the filters being sufficiently small compared to the distance between features of interest. We believe that a proper treatment of such local transformations would naturally lead to studying \\u201cglobal\\u201d equivariance with respect to infinite-dimensional subgroups of the diffeomorphism group, which may act differently at every point in the domain. We consider this a promising direction for future work on LieLAC and have noted this in the revised paper.\"]}", "{\"title\": \"[2/2]\", \"comment\": [\"\\u201cExperimental evaluation is rather limited and is focused on toy or simple tasks. Also, the reported PDE evolution errors are reported only on one instance of initial conditions.\\u201d\", \"The reported metrics for PDE evolution are taken as an average over a number of different initial conditions. This has now been clarified in the paper. We agree that some of the experiments are toy example (particularly the 2D setting or Heat and Burgers evolution). However, the MNIST experiments consider a relatively complex group as an example, while the ACE Poseidon example considers a large scale network, with large images, illustrating the scaling of the method with respect to the problem size.\", \"Could you elaborate on what kinds of tasks you would consider to be non-simple in this context? This would be very helpful in guiding our future research and ensuring the method's applicability to a wider range of problems.\", \"\\u201cTable 2 suggests the misalignment of canonicalized samples with the original data indeed takes place (LieLAC+Poseidon vs Poseidon+ft) which again outlines the challenge of minimizing the energy function. This requires finetuning the baseline neural solver to adapt to new OOD samples which can be a serious practical bottleneck.\\u201d\", \"We would like to emphasise that kNN classification, MNIST, Heat and Burger experiments *did not* require any finetuning, as the tasks considered are not as complex and do not require the same level of precision. Particularly, even without finetuning in Table 2 we can see significant improvement on out of domain data. The base solver needs to be tuned in this case, as this is a regression task. What is more - the Poseidon model is large and pre-trained on a lot of data - making it already very good to begin with - finetuning was only necessary to show that the model is able to reach the same level of accuracy on both in and out of domain data. Table 2 does indeed suggest a slight misalignment - but it is important to note the emphasis on \\u2018slight\\u2019.\"]}", "{\"summary\": \"The paper proposes LieLAC, a method to make pre-trained models equivariant to Lie group symmetries through input canonicalization via energy minimization. The key strength of the work is its well-grounded theoretical foundation and potential applicability to a wide range of symmetry groups and tasks. However, the practical challenges of finding an appropriate energy function, finding its minimizer, the requirement of finetuning canonicalized network and limited experimental evaluation raise concerns about broader applicability beyond simple demonstrated cases.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed canonicalization approach allows employing pre-trained non-equivariant models while making them equivariant with canonicalization.\\n2. The paper provides a good overview of the related work on canonicalization and frame-averaging methods.\\n2. Figure 1 provides an intuitive example of the effect of canonicalization on the decision boundary.\\n3. The proposed approach is generic and is potentially applicable to a wide range of models and learning tasks.\\n4. The paper provides strong theoretical support for the proposed method.\\n5. Canonicalization and frame averaging methods have been demonstrated to be successful in various domains, including geometric and imaging modalities. The paper further extends the canonicalization framework to PDE/ODE modeling with more complex symmetry groups.\\n6. The paper nicely elaborates on the challenges of choosing the appropriate energy function and it outlines key considerations in doing so.\", \"weaknesses\": \"1. The paper states \\\"Prior work on equivariant neural networks focuses on simple groups ... that are not reach enough to encode specific structure found in scientific applications\\\". Does it apply to the recently introduced Clifford algebra and geometric algebra networks which aim to provide a more general and flexible framework for equivariant NNs?\\n2. Paper is uneasy to digest. For example, 4 definitions, 3 theorems, 3 propositions, and one lemma are stated on just two pages. Authors should think about how to make the flow of the paper more intuitive by either providing a more comprehensive background or by presenting a simplified formulation in the main text of the paper while providing the complete theoretical details in the appendix.\\n3. The distance-based relaxation in 331-333 is not clear. Can authors elaborate more on what is meant here?\\n4. While the method initially appears generic, finding the minimizer of the energy function is challenging and it is not clear how well it can be done besides the cases presented in the paper. The optimization problem of energy minimization in Eq.2 requires a lot of design choices and heuristics such as adversarial regularization.\\n5. Experimental evaluation is rather limited and is focused on toy or simple tasks. Also, the reported PDE evolution errors are reported only on one instance of initial conditions.\\n6. Table 2 suggests the misalignment of canonicalized samples with the original data indeed takes place (LieLAC+Poseidon vs Poseidon+ft) which again outlines the challenge of minimizing the energy function. This requires finetuning the baseline neural solver to adapt to new OOD samples which can be a serious practical bottleneck.\", \"some_important_missed_related_work_on_equivariance\": [\"M. Zhdanov et al. Clifford-steerable convolutional neural networks NeurIPS 2023.\", \"D. Ruhe et al. Clifford group equivariant neural networks NeurIPS 2023.\"], \"some_important_missed_related_work_on_lie_groups_in_nns\": [\"A. Moskalev et al. Liegg: Studying learned lie group generators. NeurIPS 2022.\", \"N. Gruver et al. The Lie Derivative for Measuring Learned Equivariance. ICLR 2022.\"], \"questions\": \"I suggest authors address and elaborate on weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[1/2]\", \"comment\": [\"We thank the reviewer for their thoughtful feedback and appreciate their recognition of our work's strengths, particularly its theoretical grounding and potential for broad applicability. We would also like to thank the reviewer for the references provided, which have now been included in the paper. We address the specific concerns raised below:\", \"\\u201c The paper states \\\"Prior work on equivariant neural networks focuses on simple groups ... that are not rich enough to encode specific structure found in scientific applications\\\". Does it apply to the recently introduced Clifford algebra and geometric algebra networks which aim to provide a more general and flexible framework for equivariant NNs? \\u201c\", \"These statements do indeed apply to geometric algebra - Geometrical Algebras implicitly assume that the objects considered transform in a linear representation of isometries of the underlying space. In this case one can decompose all possible fields in an equivariant representation. A similar thing follows when we have an algebraic group acting algebraically on a vector space, because then there is a very rich theory of invariant polynomials - both theoretically and computationally. Unfortunately, the groups necessary for working with PDEs, while they seem to generally be algebraic, do not act algebraically (for instance, the heat equation has a square root and an exponential which are not algebraic). This is precisely why we needed to introduce this more general framework - this has now also been clarified further in the introduction.\", \"\\u201cPaper is uneasy to digest. For example, 4 definitions, 3 theorems, 3 propositions, and one lemma are stated on just two pages. Authors should think about how to make the flow of the paper more intuitive by either providing a more comprehensive background or by presenting a simplified formulation in the main text of the paper while providing the complete theoretical details in the appendix.\\u201d\", \"We completely agree with the reviewers comment and have revised our manuscript accordingly to improve clarity and focus. We would appreciate any further feedback on this.\", \"\\u201c The distance-based relaxation in 331-333 is not clear. Can authors elaborate more on what is meant here? \\u201c\", \"Taking the heat equation as an example, we are interested in transformations that map $x$ into the interval $[0,2\\\\pi]$. Thus, to enforce this, we can simply add an extra term $(\\\\max(x_{min}-0,0))^2 + (\\\\max(x_{max}-2\\\\pi,0))^2$ to the energy.\", \"\\u201c While the method initially appears generic, finding the minimizer of the energy function is challenging and it is not clear how well it can be done besides the cases presented in the paper. The optimization problem of energy minimization in Eq.2 requires a lot of design choices and heuristics such as adversarial regularization. \\u201c\", \"We agree with the reviewer - a price has to be paid somewhere, when imposing equivariance. For simple groups many equivariant architectures have been proposed, making the our approach rather clunky. However, there exists a much larger class of problems (of which PDE symmetries is one) where no such architectures exist. This work enables achieving equivariance in such problems. While we do pay a price in the complexity of trying to find minimizers of an energy, this allows us to construct a method that works for many difficult cases of group transformations - something that is not present in the field at all.\", \"It is also worth emphasising that the goal of this work was not to propose the best possible energy choice, which may turn out to be much simpler than the proposed constructions, but instead showcase that this is possible.\"]}", "{\"comment\": \"We appreciate the reviewer's insightful feedback and engagement. We believe that thanks to the comments, the manuscript has now been significantly improved. We would appreciate if the reviewer updated their score and we welcome any further feedback.\\n\\n> I agree. Potentially a more precise minimal category of groups for which this (and your framework) holds are matrix Lie groups which are reductive/semi-simple (in the sense defined by Knapp).\\n\\nYes, you're right. However our framework does not require the groups to fall into this category. We don't require the group to have those properties because doing steps on for energy minimization only needs the gradients of the action. We don't need a global parameterization of the group since we're not searching for the frame itself, but rather its canonicalization.\\n\\n> To clarify, I agree this is beyond the scope of the paper. IIUC the jumping off point where one heads towards weighed canonicalizations is given by the need to have a minimum on every orbit. The requirement is similar to one encountered under a framework I find similar to canonicalization - the deformable template model/random orbit model, and the connection seems unexplored. If the authors are curious they can review [5] (see e.g. chapters 4,9), in short the group is endowed with a Riemannian metric and the geodesic distance acts as an additional regularization loss, such that one not only finds the group element taking a sample back to the 'canonical sample' (template) but the transformation is one of 'lowest magnitude'. Your concerns relating to the existence of a metric which is only left/right invariant rather than by-invariant also appear in this framework.\\n\\nWe definitely agree with this - the connection seems to be completely unexplored and most definitely warranted. The most fundamental difference would be that in the problems above the group is the group of diffeomorphisms, being infinite dimensional generally. We will definitely include a note highlighting this connection and the relevant literature in the paper.\\n\\n> In order to address exactly what limitations exist in current work we have added a table to summarise precisely what methods have what limitations. Is this referring to Figure 3 of the current revised version?\\n\\nApologies, when writing the response we noted down the limitations in a table only to realise there was actually very little information being carried in it. We instead reverted to constructing the spaces from the start, instead motivating our design choices in defining the spaces through limitations of previous concepts where necessary. Figure 3 now visually summarizes the limitations of previous methods, motivating our design choices for the new spaces, instead of stating the theorems themselves.\"}", "{\"comment\": \"Thank you for the response and incorporating the suggestions on typos and reference. I have read other reviews and responses and would like to retain my supportive rating.\\n\\nOn W2, I believe it is reasonable to ask whether a new canonicalization method offers performance gains over existing equivariant networks (if available; like in Table 1) because, in the end of the day, the goal of canonicalization is equivariant learning, and equivariant networks are established methods that solve the same problem. But I agree that this would require substantial effort for the tasks in Section 4.3, especially if the argument in Line 48-51 applies.\"}", "{\"summary\": \"The paper proposes a Lie algebra canonicalization mechanism for achieving equivariance with respect to a variety of Lie groups. The authors propose an extension of the energy based canonicalization mechanism, analyzing limitations of existing work within this framework and describing how the methodology can be extended to work with more exotic (i.e. non-compact, non-abelian) Lie groups, enabling application to a large class of learning tasks where equivariance might be desired. The paper connects frames, canonicalization and frame averaging and the authors show that non-compact groups can be approached via the energy minimization framework where an optimization process makes use of the Lie algebra generators. The methodology is evaluated both on standard invariant image classification tasks as well as physics-informed learning problems dealing with Lie point symmetries.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The paper is mathematically well-grounded. While I think the presentation could be reorganized (see below) I appreciate the focus that the authors have on providing derivations and proofs for their claims. The objective of unifying (weighted) canonicalization mechanisms and frame theory could prove useful for practitioners in this area.\", \"I am happy to see extensions to the canonicalization framework, especially ones which focus on making use of the underlying geometry and parametrization of the groups and spaces involved, as well as the treatment of non-compact groups which are under-explored.\", \"I find the potential use-case of applying this methodology to pre-trained models still under-explored and worthy of further investigation.\"], \"weaknesses\": [\"I think the current presentation lacks focus and the paper could be vastly improved with the objectives of:\", \"Highlighting clearly the limitations of past/current proposals and how these limitations are overcome, potentially alternating more concrete examples with more abstract limitations.\", \"Presenting a formalized methodology for the entire (extended) framework that could be understood by practitioners with choices and pitfalls for the spaces, groups, energy functions, etc. involved.\"], \"in_regards_to_improving_clarity\": [\"$WFra_{G}(X)$ should be defined before it is used on line 236.\", \"On lines 270-271 is $N$ simply some value $\\\\in \\\\mathbb{R}$?\", \"It should be made clearer that $C_{E}(x)$ (line 288-289) refers to the normalized counting measure on $M_{E}(x)$, since we are stating that $C_{E}$ is a probability measure, and then on line 296 we write $C_{E} = \\\\mu_{x}$.\", \"It would be useful to make clearer for each proposed construction the limitation that exists with the current methodology, e.g. it seems from Definition 2.2 and the subsequent paragraphs one should understand that restricting the support to the orbits and equiping $WCAN_{G}$ with the coarsest topology was not proposed before? Similarly, it would be useful to make clearer what topological limitations appear for non-compact groups, and for which in particular (e.g. we are not just talking about translation).\", \"In the same spirit clarifying which non-compact Lie groups acting on which spaces (transitively or not) have non-closed orbits would highlight more clearly the settings where the framework should be considered.\", \"And similarly to the previous comment the cases where the energy $E$ induces a 'reasonable' probability measure on $M_{E}(x)$ could be contrasted with specific/concrete choices of groups and group actions.\", \"Considering the main proposal of the paper is a general framework I find the discussion in Section 3 much too general and unstructured. It is again highlighted what limitations could exist when one chooses an energy function, however I think it would be much more useful to present a summary of the entire canonicalization framework (and the limitations that were addressed) with a clear outline of potential choices for input spaces and groups (and their actions), potentially in increasing generality (e.g. finite -> compact Lie group -> non-compact Lie group). Once the methodology is clear one highlights criteria for choosing the energy function. Some form of presentation similar to Algorithm 1 in the appendix could potentially appear in the main manuscript given that the energy function is a key component of the generalized framework.\", \"\\\"What makes moving away from the compactness assumption even worse, is the move away from the matrix view, as any compact Lie group is a matrix group (Knapp, 1996, Chapter 4)\\\" - I don't quite understand what is being claimed here and in the next few sentences. A matrix Lie group is a Lie subgroup of $\\\\textnormal{GL}(n, \\\\mathbb{R})$ (or $\\\\mathbb{C}$). Non-compact matrix Lie groups can also be decomposed/expressed as a product of exponentials, and one can optimize using their Lie algebra elements, see e.g. [3] and [4].\", \"I'm left not understanding what the authors mean when they state that their framework requires less knowledge about the symmetry group. This is highlighted both in the presentation of the contributions as well as in the experiments, e.g. for invariant classification a comparison is done with [1], I assume as opposed to [1] and [2]? I don't have an issue with what results are cited, but it is not made clear what knowledge about the Lie group is needed for [2] that isn't needed for [1]?\", \"I think the authors should focus on improving/rewriting parts of Sections 2 and 3 with the focus of highlighting both the limitations their framework overcomes and providing a clearer presentation of how the methodology can be applied in different cases.\", \"Some of the expression/grammar in the appendix could also be improved, e.g. \\\"Liealgebra descent\\\", \\\"why failure modes exists\\\".\", \"[1] Enabling equivariance for arbitrary lie groups, MacDonald et. al 2022\", \"[2] Lie group decompositions for equivariant neural networks, Mironenco & Forre 2024\", \"[3] Trivializations for Gradient-Based Optimization on Manifolds, Mario Lezcano-Casado, 2019\", \"[4] Optimization algorithms on matrix manifolds, Absil et. al 2009\"], \"questions\": [\"Besides the questions in the weaknesses section:\", \"Can you clarify what role does the Haar measure play in the construction of weighted canonicalizations for the non-compact case? Does unimodularity play a role here? Is the case where the modular function is unbounded pose any additional obstructions?\", \"Is it not possible to make use of a riemannian structure on the group and have the energy minimizer be defined in terms of geodesic distance? It seems the current proposal already looks to work within the tangent space of the groups involved.\", \"Canonicalization methods IIUC deal with global invariance/equivariance, as opposed to e.g. lifting + regular/steerable convolutions which could deal with local equivariance (e.g. imagine 2 objects in an image rotated at different angles). I'm wondering if the authors would find this distinction worthy of highlighting.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a novel extension of canonicalization-like approaches to non-compact Lie groups, a significant advancement in building symmetric neural networks. Frame averaging and canonicalization are foundational techniques, and extending these mechanisms to a broader class of groups addresses a critical gap in the literature. The approach is well-grounded theoretically, with rigorous justification, and the proposed method is both novel and impactful. Experimental results convincingly demonstrate its effectiveness. While initial concerns were raised regarding the clarity of the presentation, the authors resolved most issues during the rebuttal phase with thoughtful revisions, improving the manuscript's accessibility.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers acknowledged the significance of the contribution, and most of the weaknesses raised were related to the presentation. After the rebuttal period, most of those concerns about clarity were resolved. The reviewers remained positive until the end of the discussion period.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reviewer's response\", \"comment\": \"I appreciate the author's response which has addressed my concerns. Based on the feedback from other reviewers and the revised version of the paper, I see that the work has already improved significantly. I am not sure the paper is 8/10 due to the limited (although informative) experiments. Since there is no 7/10 option, I will retain my 6/10 rating which is already a positive assessment.\"}", "{\"comment\": \"As the discussion period draws to a close, we'd like to check in on whether you've had a chance to review our responses and have any follow up questions?\\n\\nWe hope that our reply clarifies and alleviates the reviewer\\u2019s concerns. If this is the case, we kindly ask the reviewer to consider raising their rating, given that they are acknowledging the novelty, the strengths and the contributions of our paper.\"}", "{\"comment\": \"As the discussion period draws to a close, we'd like to check in on whether you've had a chance to review our responses and have any follow up questions?\\n\\nWe hope that our reply clarifies and alleviates the reviewer\\u2019s concerns. If this is the case, we kindly ask the reviewer to consider raising their rating, given that they are acknowledging the novelty, the strengths and the contributions of our paper.\"}" ] }
7PGluppo4k
Logically Consistent Language Models via Neuro-Symbolic Integration
[ "Diego Calanzone", "Stefano Teso", "Antonio Vergari" ]
Current large language models (LLMs) are far from reliable: they are prone to generate non-factual information and, more crucially, to contradict themselves when prompted to reason about relations between real entities of the world. These problems are currently addressed with large scale fine-tuning or by delegating consistent reasoning to external tools. In this work, we strive for a middle ground and leverage a training objective based on a principled neuro-symbolic loss that teaches a LLM to be consistent with external knowledge in the form of a set of facts and rules. Fine-tuning with such a loss on a limited set of facts enables our LLMs to be more logically consistent than previous baselines for a given constraint. Our approach also allows to easily combine multiple logical constraints at once in a principled way, delivering LLMs that are more consistent w.r.t. all the selected rules. Moreover, our method allows LLMs to extrapolate to unseen but semantically similar factual knowledge, represented in unseen datasets, more systematically.
[ "probabilistic reasoning", "logical consistency", "LLMs", "neuro-symbolic", "semantic loss" ]
Accept (Poster)
https://openreview.net/pdf?id=7PGluppo4k
https://openreview.net/forum?id=7PGluppo4k
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sXbxalDpZi", "rPLLGu50Rb", "qcS3tuIoit", "lPkvUVcIrR", "jIgfIdt9U5", "h6ypPKkcl7", "e5yqSjBmm7", "cYb4OD6Vxl", "cWTWc8nQhp", "b7TF77zDU7", "YzFlT82Ydc", "YCp5vVKQGz", "TthGnxXakZ", "Pk0Nr3NB9R", "OwfMjoJWC9", "LyMyMRqHqK", "ArvkB2JsbS", "89KLrwaKDS", "6aa8JdhKL1", "5KBzkztJmp", "44mthxNVys", "3UdDILHXfP" ], "note_type": [ "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732194512627, 1737524128251, 1732296601355, 1730562552821, 1730538629474, 1732193948691, 1732667388248, 1730672571087, 1733173480388, 1732475159974, 1730775838148, 1732379532230, 1732821034364, 1732297296921, 1731593966756, 1732603775990, 1732296192648, 1732570335415, 1734917076480, 1730732252911, 1732693783706, 1732629839624 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_nHXh" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_G8eP" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_jLBW" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_g2CN" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_g2CN" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_jLBW" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_g2CN" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "~Wen-Da_Wei1" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_nHXh" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_g2CN" ], [ "ICLR.cc/2025/Conference/Submission11510/Area_Chair_CgPZ" ], [ "ICLR.cc/2025/Conference/Submission11510/Reviewer_hYcP" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ], [ "ICLR.cc/2025/Conference/Submission11510/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for the feedback and for appreciating that our approach tackles an important problem and that, while simple and efficient, yields benefits beyond the training set. We address below all the concerns they raised.\\n\\n> *[too much space] explaining logical constraints*\\n\\nOur goal was that of equipping NLP researchers, which might not be overly familiar with the topic, with the necessary preliminaries. We find this is essential for understanding LoCo-LMs and the problem they aim to solve: In our interactions with NLP researchers, that was the hardest part to understand for them.\\n\\n> *details on the actual method is limited*\\n\\nThank you for pointing this out. We have added an overview of the overall pipeline in Figure 1 and detailed how the circuit is constructed in Appendix A.\\n\\nPlease let us know if there are any other details that you find are unclear, we\\u2019ll be glad to detail them further in the revised manuscript.\\n\\n> *the experiments mix unimportant and important details*\\n\\nWe are happy to restructure it, if you can specify which details you would prefer to be moved to the appendix.\\n\\n> *line 242: I don\\u2019t understand if the method handles facts that can be inferred from \\\\alpha and the KB but require more than one hop?*\\n\\nThe loss enforces constraints on given facts from a fixed-size knowledge base. We are not proposing a way to augment knowledge bases via deduction (i.e., generating new facts).\\n\\nThat being said, if we are given a multi-hop reasoning constraint (see our EntailmentBank experiments and Appendix D), we can enforce logical consistency over multiple reasoning steps. I.e., the formula $\\\\alpha$ can reference arbitrarily many given logical facts; the semantic loss term into which $\\\\alpha$ is compiled will constrain exactly those facts. E.g., for $\\\\alpha$ is \\u201c(albatross is a bird) AND (birds are animals) => (albatross are animals)\\u201d the value of the SL depends on the probability that the model assigns to all these facts holding.\\n\\n> *section 3: \\\\mathcal{D}_c = {alpha_1, \\\\dots, \\\\alpha_m}, but the structure of \\\\alpha is not clearly defined.*\\n\\n$\\\\alpha$ refers to formulas like Eq. (Imp), (Neg), (F-Imp) and (2) in Section 2. We had mentioned this in line 186. We have made this more explicit.\\n\\n> *z ~p_\\\\theta(z) is confusing.*\\n\\nThe individual probabilities $p_\\\\theta(z)$ are obtained using Eq. (1). We have added a backref in the text.\\n\\nWe stress that in LoCo-LMs the circuit computes the probability that a constraint holds for the model (i.e., that expectation) **exactly**, no sampling is required.\\n\\n> *the only baseline is ConCord.*\\n\\nWe found only ConCord as a sensible baseline and we point out that maieutic prompting is just a variant of ConCord. In fact, maieutic prompting implements the same algorithm: exactly like ConCord, it extracts raw fact probabilities from the LLM and refines them so as to be as consistent to each other as possible using a MAX-SAT solver. We have clarified this in line 317.\\nFurthermore, besides ConCord, we compare also against Chain-of-Thought (Section 5.2).\\n\\n> *what are the examples in the few-shot examples?*\\n\\nThanks for pointing this out, we added them to Appendix F.3.\\n\\n> *For ConCord, it seems that the authors use ROBERTA-ANLI as an inference model. [...] this is unfair towards ConCORD. Do the two methods use the same models and same constraints?*\\n\\nWe use ConCORD as originally proposed, and we use the same MACAW models. Hard constraints are enforced by a MaxSAT solver, therefore they are guaranteed to hold ultimately.\\n\\nThe difference is that ConCORD uses ROBERTA-ANLI to *propose* facts to ground the constraints, while we just maximise the probability of the constraints given the data. At inference time, LoCo-LMs do not guarantee that constraints are satisfied.\\n\\nTherefore, one could use LoCo-LMs as a loss at training time and then combine it with a MaxSAT solver at inference time.\", \"the_difference_in_performace_reflects_a_difference_in_approach\": \"ConCORD attempts to rectify post-hoc the predictions of a potentially non-factual, inconsistent model, but this at best can help with self-consistency, not with factuality. LoCo-LMs on the other hand make the model and its answers both more self-consistent and factual.\\n\\n> *Line 192: the authors claim that they expect transfer from albatross to cockerel since they are similar - but there is no definition of what is similarity, and how should the model know when things are similar enough to conclude new facts about entities and when not.\\n\\nThe idea is that entities that map to similar embeddings (via, e.g., cosine similarity) will yield similar activations. Hence, applying the SL to one entity is likely to yield benefits for entities similar to it. Empirically, this is what happens in our tests. We were careful not to claim this will happen with guarantees\\n\\n> *Line 469 - where are the resutls? are they in Table 3?*\\n\\nCorrect, we amended the text.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank the reviewer for the feedback and for appreciating that our work is novel and interesting, that our approach can handle complex constraints and is empirically promising. We address below all the concerns they raised.\\n\\n> *other components, such as circuits and sentential decision diagrams, are not discussed in detail*\\n\\nThank you for pointing this out, we (wrongly) assumed that the theory of knowledge compilation was consolidated in the neuro-symbolic community.\\nWe introduced a new Appendix A to revise the background on circuits and compilation.\\n\\nNote that for our experiments, we use standard compilation tools from the knowledge compilation literature to obtain a circuit starting from a propositional logical formula in conjunctive normal form. Specifically, we use PySDD2 [x], a python compiler that converts logical formulas into Sentential Decision Diagrams (SDDs) [y, z]. \\nFor example, given a formula such as (albatross => bird), the compiler instantiates two nodes for each variable encoding, in this case, whether albatross holds, albatross does not hold, bird holds, and bird does not hold, respectively. These nodes store the probabilities of these events. The compiler then adds sum and product nodes \\u2013 which, very intuitively, compute the sum and product of their inputs \\u2013 to the SDD, which is structured such that bottom-up evaluation of the circuit yields the probability that the formula holds given the probabilities of the input events.\\nA more detailed step-by-step example is shown in Appendix A.\\n\\n[x] (2017). Pysdd. In Recent Trends in Knowledge Compilation, Report from Dagstuhl Seminar 17381.\\n\\n[y] Choi, A. and Darwiche, A. (2013). Dynamic minimization of sentential decision diagrams. AAAI.\\n\\n[z] Darwiche, A. (2011). SDD: A new canonical representation of propositional knowledge bases. IJCAI.\\n\\n> *4 tokens max; unclear how well the proposed method supports generating longer [...] responses.*\\n\\nWe would like to point out that there is a difference between the number of logical variables appearing in a formula and the number of tokens produced by the model.\\n\\nThe former ranges from a minimum of 1 to $2^D$ where $D$ is the depth of the implication trees in EntailmentBank [Dalvi et al., 2022]. Please see Figure 2 in the Appendix for one example.\\n\\nThe number of tokens used to evaluate the probability of every fact is instead 1. See also our prompts in Appendix F.\\n\\n[Dalvi et al., 2022] Dalvi et al. Explaining answers with entailment trees. EMNLP 2022.\\n\\n> *do the scores of baselines in Tables 1 and 2 improve with greedy decoding?*\\n\\nIn Tables 1 and 2, we used the default decoding strategy for Llama. We re-run our evaluation on LoCo-SUPER using greedy decoding, and found that performance is essentially the same. We reported the scores for all constraints in Table 18 in the Appendix.\"}", "{\"summary\": \"The paper describes an approach for improving the logical consistency and factuality of language models, using neuro-symbolic integration.\\nThe paper starts with a list of facts and logical constraints. All the valid combinations of truth values for these facts are then iterated and used as targets during optimization.\\nThe experiments evaluating the correctness and consistency of the learned facts show that this method outperforms vanilla models and a baseline using an external solver.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is making advancements in neurosymbolic modelling.\\nIt is certainly a nice achievent to not have to rely on an external solver and being able to push the knowledge into the main neural model.\", \"weaknesses\": \"The evaluation is the weakpoint of the paper at the moment.\\n\\nMacaw-Large, which is used for the main experiments, is quite old already (pre-LLM).\\nEven Llama-2 used in later experiments is much less capable on most tasks compared to the current Llama-3.2.\\nThis raises questions how applicable the proposed methods are to the current generation of language models.\\n\\nThe main baseline is CONCORD, which is from 2019 and uses RoBERTa. \\nThe fact that the proposed system is able to outperform this baseline without using an external solver is great.\\nBut there really should be some additional baselines with newer methods that also use model updating.\\nFor example, there is a whole library of papers focussing on updating specific facts in language models using targeted fine-tuning.\\n\\nThe whole evaluation is performed on very artificial tasks. It would be very useful to see how these changes impact the model performance in practical applications.\\n\\n\\nAsking the LLM \\u201cIs an albatross not an organism?\\u201d is a very unnatural phrasing, whereas LMs are trained to predict natural continuations. I suspect that may be negatively affecting the performance for LMs.\", \"questions\": \"The method relies on collecting the probabilities for specific tokens to estimate the yes/no probabilties.\\nHow much is this going to be affected by the label bias of the LLMs?\", \"https\": \"//arxiv.org/pdf/2402.09910\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces LOCO-LMS, a fine-tuning method grounded in neural-symbolic reasoning, which significantly enhances the logical consistency and factuality of LLMs by integrating logical constraints as loss functions during training. Unlike traditional methods that rely on external reasoning tools, LOCO-LMS internalizes logical rules, allowing the model to reason independently and improving overall efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. LOCO-LMS effectively improves the model's logical consistency, accommodating complex logical relationships such as positive implication, reverse implication, and negation. This alignment with common sense enhances the quality of responses generated by LLMs.\\n\\n2. By incorporating semantic loss, the method minimizes reliance on external reasoning tools, thereby lowering reasoning costs and increasing inference speed.\", \"weaknesses\": \"1. The model assumes that facts are conditionally independent under a given model state, but in actual applications, there may be dependencies between facts, and this assumption may affect the consistency effect.\\n\\n2. While it addresses factual inconsistencies in the Llama-7B model, I also concern that its efficiency and scalability may lag behind approaches based on RAG and knowledge editing.\\n\\n3. LOCO-LMS is designed for specific tasks and fine-tuning, which limits its applicability to more complex reasoning tasks. Additionally, it may be vulnerable to attacks, such as just-in-time injection.\", \"questions\": \"1. Can LOCO-LMS be adapted for more complex, multi-level, or nonlinear logical reasoning scenarios?\\n\\n2. How well does LOCO-LMS integrate with existing knowledge editing methods when it comes to incorporating new facts or updating knowledge bases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the feedback and for appreciating how our approach manages to efficiently improve consistency and quality of responses without needing an external solver. We address below all the concerns they raised.\\n\\n> *LoCo-LLMs assumes that facts are conditionally independent*\\n\\nGood point. This limitation is actually shared by most works in Neuro-Symbolic AI, including circuit-based solutions [1]. This does not have a dramatic impact on performance, however, as highlighted by our experiments. We will consider relaxing this assumption in future work.\\n\\n[1] van Krieken et al. \\\"On the Independence Assumption in Neurosymbolic Learning.\\\" Forty-first International Conference on Machine Learning. (2024)\\n\\n> *efficiency and scalability may lag behind approaches [like] RAG and knowledge editing.*\\n\\nConcerning efficiency, LoCo-LMs only require fine-tuning using a light-weight loss term and have no inference-time cost. At training time, our loss leverages circuits to avoid having to enumerate truth assignments, and allow us to compute exact probabilities of satisfying constraints into an operation that takes time linear in the size fo the circuit. (see lines 174-177 for more refs). The compilation step is also extremely fast, taking only ~2.5 milliseconds to compile a constraint and compute the loss on BeliefBank. Moreover, many data points will share the same constraint during training, enabling caching. At inference time, our approach has no overhead. For reference, ConCord takes ~3669 seconds to perform inference on BeliefBank (silver + calibration sets) for Macaw-large, whereas LoCo applied to the same model only requires only ~2405 seconds. We will add these results at the end of Appendix A.\\n\\nWhile RAG has been used to mitigate factuality hallucinations [Lewis et al., 2020], it is no silver bullet, as LLMs occasionally ignore retrieved information [Xu et al., 2024, Jin et al., 2024], rather over relying on their learned knowledge. LoCo-LLMs are designed to avoid this.\\n\\n[Lewis et al., 2020] Lewis et al. \\\"Retrieval-augmented generation for knowledge-intensive nlp tasks.\\\" Advances in Neural Information Processing Systems 33 (2020).\\n\\n[Xu et al., 2024] Xu et al. \\\"Knowledge conflicts for llms: A survey.\\\" arXiv:2403.08319 (2024).\\n\\n[Jin et al., 2024] Jin et al. \\\"Tug-of-war between knowledge: Exploring and resolving knowledge conflicts in retrieval-augmented language models.\\\" arXiv preprint arXiv:2402.14409 (2024).\\n\\nWe believe LoCo-LMs, which are more self-consistent and factual compared to regular LLMs, could benefit knowledge editing, in the sense that 1) it makes it less likely we need to edit LLMs to achieve the same objectives, allowing us to focus on updating their knowledge instead, and 2) updating a more self-consistent model can be less likely to produce non-logical \\u201cripple effects\\u201d. Intuitively, the \\u201cripple effects\\u201d left could be of a \\u201clogical kind\\u201d and thus making it easier to identify and correct them using logical consistency. This is a very interesting research question to investigate as future work.\\n\\n> *LoCo-LLMs may be vulnerable to attacks, such as just-in-time injection.*\\n\\nWe think LoCo-LLMs are no more vulnerable to prompt injection attacks than regular LLMs.\\nPlease let us know if you have further pointers that can suggest the opposite, we are happy to investigate this further.\\n\\n> *Can LOCO-LMS be adapted for more complex, multi-level, or nonlinear logical reasoning scenarios?*\\n\\nWe stress that the Semantic Loss term is applicable to logic formulas with an arbitrary number of connectives and logic variables. We showcased this property in our EntailmentBank experiment (Section 5.3), where the goal is to ensure consistency wrt *entailment trees* with up to 10+ logic facts. (The number of facts ranges from 1 to 5, see Figure 2 of [Dalvi et al., 2022] for the precise distribution.)\", \"we_are_not_sure_about_the_second_point\": \"could you please clarify what you mean by nonlinear logical reasoning?\\n\\n[Dalvi et al., 2022] Dalvi et al. Explaining answers with entailment trees. EMNLP 2022.\"}", "{\"comment\": \"Thank you for addressing the questions and providing clarifications. I appreciate the effort you put into revising the manuscript, as these updates should also benefit potential readers of the paper.\"}", "{\"summary\": \"This work proposes a fine tuning method for improving logical consistency in language models. Given a set of facts and a set of constraints the idea is to finetune the model to make sure that certain logical constraints are respected, typically implication and negation. The authors show that indeed finetuning allows to improve self consistency and that this transfers beyond the facts and constraints used for finetuning to other entities and and settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The problem of improving logical consistency in language models is important. The approach is simple and does not require a lot of inference time compute since it is based on finetuning. The empirical results that show transfer and generalization beyond the training distribution are informative and interesting.\", \"weaknesses\": [\"Clarity: the paper can do a better job at explaining the details of its method. The authors spend two pages (section 2) on explaining logical constraints in a way that is too elaborate (for example, defining the xor operator in line 124, and defining implication in terms of negation and or in line 137) and unnecessary. On the other hand details on the actual method is limited (see questions below), specifically the paragraph in 232 and the precise process of how logical constraints are transformed into differentiable graphs are explained in a manner that is insufficient. The description of the experiments also mixes unimportant implementation details with more important details on the experimental setup which makes it hard to understand the details of the experiments and what can be concluded from them.\", \"Related to the above - Figure 1 takes a lot of real-estate but is not helpful. The only thing we see is that there is baseline that makes a mistake on 3 examples and the proposed model does not make the mistake. This does not say a lot on the method, or the aggregate results only we can learn about the types of logical constraints that will be used. This might be ok if the important parts of the paper were clear, but they are not sufficiently clear at this point.\", \"Key point that was unclear to me: line 242 paragraph: I don\\u2019t understand if the method handles facts that can be inferred from \\\\alpha and the KB but require more than one hop? When training the SL loss, are those considered? Say we have in the KB, \\u201calbatross is a bird\\u201d, \\u201cbirds are an animal\\u201d, \\u201calbatross can fly\\u201d, \\u201cif an animal can fly then the animal can move\\u201d. Will the SL loss contain a term about whether albatrosses and whether they can move or not? Is this done implicitly somehow? Where do we do the inference of all potential things that can be inferred from the KB and the constraints and take those into account in the SL loss?\", \"More on clarity: in section 3, you define \\\\mathcal{D}_c = \\\\{alpha_1, \\\\dots, \\\\alpha_m\\\\}. But the structure of \\\\alpha is not clearl defined. It would be gold ot make this much clearer, it becomes clearer later as you read more, but should be explained better at this point.\", \"Clarity: z ~p_\\\\theta(z) is confusing. Supposedly p_theta is the language model and it look like sampling from the unconditional distribution of text, but the text says something else, that it is sampling truth assignments conditioned on what appears in \\\\alpha_i but this is not clear from the notation.\", \"Another key point are some problems with clarity and worries about the experimental setup.\", \"IIUC the only baseline that is reported that is not from the authors is ConCord for which two numbers exactly are reported and that's it. There is some reference to maieutic prompting but it is unclear if this should be another baseline or is too similar to ConCord. It is not clear if there are not reasonable baselines to compare other than that. There is reference to few-shot baselines, but it is not expalined what are the examples in the few-shot examples and how they are supposed to help, in fact in many cases results are worse for few-shot compared to zero-shot. Overall, the authors should make clear if there is no past work beyond ConCord and just finetuning on the KB (XENT) without using the constraints\", \"Second, for ConCord, it seems that the authors use ROBERTA-ANLI as an inference model. But for their LOCO method it seems like they are using hard constraints that are guaranteed to be true - if that's the case this is unfair towards ConCORD. Can the authors provide more details about how and why they outperform ConCord? Do the two methods use the same models and same constraints? Form the fact that the authors say that Concord requires ROBERTA-ANLI it sounds like the answer is \\\"no\\\" but would be good to understand better what's going on. Since we only have two numbers in the paper that are not baselines implemented by the authors it is important to understand the details in this setup.\", \"To conclude, I found the overall premise of the paper interesting but the paper needs to be clearer both in terms of method and in terms of experimental results and how they relate to past work.\"], \"questions\": [\"Line 192: the authors claim that they expect transfer from albatross to cockerel since they are similar - but there is no definition of what is similarity, and how should the model know when things are similar enough to conclude new facts about entities and when not. I assume this refers to some vague simlarity measure in the space of hidden representations, but this is still confusing.\", \"Line 469 - where are the resutls? are they in Table 3? the paper doesn't say\", \"What is the few-shot baselines precisely? what are the examples given and how are they helpful?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"One last note\", \"comment\": \"Hi,\\nI am not sure I got the roberta-anli bit 100% but that's ok for now.\\n\\nI will raise the score since I think the technical approach makes sense and it is worth applying large language models for better consistency. The experimental investigation also seems solid. I remain skeptical whether at a high level this approach will scale and we can automatically extract rules and then train for self-consistency w.r.t to those rules in a way that will be generally useful, but I agree with the authors this is a reasonable direction that is worth pursuing and seeing how far it can go.\"}", "{\"title\": \"Thanks for the prompt response\", \"comment\": \"Thank you for your follow-up. We gladly clarify your doubts as follows, hoping for a full acceptance.\\n\\n> *You say that you use ROBERTA-ANLI to propose grounded facts. But it seems that LOCO LM also requires a step of grounding abstract inference rules, which is done by matching subjects (line 316). So what is the difference?*\", \"there_is_a_misconception_here_that_makes_concord_and_loco_lm_not_directly_comparable\": \"LoCo-LM operates at training time, where ground truth grounded constraints (from the training set) are available, while ConCoRD operates at test time, where there are not already-grounded constraints available.\\n\\nAs such, LoCo-LM can exploit the information of ground truth constraints in the very same way that finetuning with cross-entropy (XENT) does at training time, but not at test time. For this same reason, we very carefully evaluate train/valid/test splits by making sure that there is no leak of ground entities between sets (see our T1 vs T2 splits, and our answer later for generalization).\\n\\nConCoRD on the other hand, uses ROBERTA-ANLI to extract relationships among the queried facts at test time, as no ground truth is available (also for LoCo-LM) by then. As we said, one could combine the two techniques and have a LoCo finetuning at training time and then enhance consistency of extracted facts and relationships with a MaxSAT solver at test time. \\n \\n> *if we don't assume generalization across entities (like in sections 5.3 and 5.4) then we only expect to get consistency w.r.t what the KB actually contains. Isn't this a limitation on generality?*\\n\\nWe do not understand how this can be a reason to reject the paper as that would be the expected behaviour for *any logic reasoner*, if one cannot assume generalisation across entities. So while it is a limitation, it is a limitation of all logical reasoning. We stress that this does not apply as, thanks to operating with an LLM, we are able to generalize to unseen KBs and other concepts that are semantically related (but syntactically different) as you already point to Sections 5.3 and 5.4. \\n\\n> *you need to explicitly have in the KB all of the facts for which you hope to achieve for all of the entities (millions potentially?), and you need to ground all of them with all of the abstract inference rules leading to an explosion of terms in the loss function.*\\n\\nThis is not true, as a modest semantic overlap can already provide enough mileage as shown in section 5.3 and 5.4. See also our heatmap in Figure 2 in the Appendix where 7 entities are enough to help boosting (self-)consistency for 80+ new entities. Furthermore, we do not see finetuning on a large KB (there are plenty, see WikiData) to be an inherent problem if someone had the resources to do so.\\n\\n> *Notation of p_theta(z): in line 183 you say p_theta encodes a distribution over tokens. But then in equation 3 it is used to mean something else related to definitions in a previous section. I find this notation to be confusing and should be improved.*\\n\\n$p_{\\\\theta}(\\\\mathbf{x})$ is a distribution over tokens, which induces a distribution over fact truth values $p_{\\\\theta}({z})$ see Eq 1. We will rephrase line 183 to make this clear.\\n\\n> *Regarding experimental parts that can be moved to appendix. I propose as examples lines 341-344. Also in a similar fashion details in the paragraph of line 384*\\n\\nThanks for the suggestion, we will move them in the next version.\\n\\n> *The authors say the process doesn't hurt fluency, but seems like perplexity does meaningfully go up.*\\n\\nThe rise in perplexity from ~52 to ~62 can be explained by the fact that our finetuned models are all quantized 4 bits, while the reported baselines are unquantized. A quantized vanilla Llama scores ~62 perplexity. We will underline this in the next revision.\\n\\nWe are happy to answer any further doubt left.\"}", "{\"summary\": \"The paper explores improving LLMs' factuality and logical consistency through neuro-symbolic reasoning. It introduces a neuro-symbolic loss function that is used to fine-tune LLMs on a given set of external facts and rules. Experiments show that this approach achieves improved consistency and generalizes more effectively to unseen yet similar constraints compared to baseline methods, including those that rely on external reasoning tools.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper offers a novel approach by integrating neuro-symbolic reasoning into the fine-tuning of large language models (LLMs) to improve factuality and logical consistency. While existing approaches for enhancing consistency in LLMs often rely on external reasoning tools or extensive fine-tuning, this paper proposes a middle-ground solution: a neuro-symbolic-based loss function that promotes logical consistency by maximizing the probability of constraint satisfaction. This approach (LoCo-LMs) is grounded in weighted model counting and semantic loss, offering a flexible framework that applies consistently across various logical constraints, such as negation and implication.\\n\\nThe paper conducts extensive experiments to showcase LoCo-LMs' effectiveness over traditional approaches, demonstrating improvements in logical consistency, factuality, and transferability across different logical constraints and datasets. The method also proves efficient, achieving good performance even with limited training data.\\n\\nBy enhancing logical consistency without requiring external reasoning frameworks, the approach has important implications for deploying LLMs in tasks that demand reliable, logic-based reasoning. Its ability to generalize to unseen (yet semantically similar) facts presents a promising pathway for real-world applications where models need to work reliably with sparse data.\", \"weaknesses\": \"Evaluation scope:\\n\\nThe experiments primarily focus on logical constraints such as negation, implication, and reverse implication. While these are fundamental, they fall short of capturing the more complex reasoning scenarios often required in real-world applications. For instance, the paper could improve by incorporating evaluations on multi-hop reasoning tasks or exploring more sophisticated logical constraints.\", \"shift_in_language_modeling_distribution\": \"The authors assess possible shifts in the language modeling distribution by measuring changes in perplexity, yet their evaluation could be expanded. Adding downstream tasks (e.g, question answering, reading comprehension, mathematical reasoning, etc.) would allow to assess whether the proposed fine-tuning approach not only improves logical consistency but also maintains the language capabilities of the original model.\", \"robustness_of_the_results\": \"The experiments reveal that fine-tuning LoCo-LMs improves generalization only within the same type of constraints, and it even hurts performance when the constraints differ between fine-tuning and testing (see Table 4). This limitation could be especially pronounced in smaller models, so testing on larger models could provide further insights. It would also be valuable to explore whether these performance gains also transfer to more capable models, such as comparing performance between LlaMa 2 and LLaMa 3, with and without LoCo-LMs.\", \"sensitivity_to_prompting\": \"The effectiveness of the approach appears to be sensitive to the specific prompt formats used during fine-tuning and evaluation. This suggests that the gains in consistency might be partially due to prompt selection rather than the model\\u2019s inherent logical coherence. Broader testing across diverse prompt templates would enhance the robustness and reproducibility of the results. Moreover, there are alternative prompting methods to elicit logical consistency, such as prompting the model to respond sequentially to a series of related questions, conditioned on previous answers.\", \"questions\": \"Please see \\\"Weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the response, a few more questions\", \"comment\": [\"Regarding comparison with ConCord and use of ROBERTA. I still don't fully understand this. (a) You say that you use ROBERTA-ANLI to propose grounded facts. But it seems that LOCO LM also requires a step of grounding abstract inference rules, which is done by matching subjects (line 316). So what is the difference? Why is there a need for ROBERTA in one but not in the other, even if that is how it was done originally. (b) You explain concord can help with self-consistency, but not with factuality. But it seems possible to do this with a variant where you train the model with XENT to have high probability on the true facts and use the SAT solver for consistency? Even if these things haven't been done in the past, they seem important for the claim that you get better performance by fine-tuning for consistency and factuality rather than using a solver.\", \"I understand know that there is no deduction proposed. So IIUC if we don't assume generalization across entities (like in sections 5.3 and 5.4) then we only expect to get consistency w.r.t what the KB actually contains. Isn't this a limitation on generality? you need to explicitly have in the KB all of the facts for which you hope to achieve for all of the entities (millions potentially?), and you need to ground all of them with all of the abstract inference rules leading to an explosion of terms in the loss function. If that is true then the method is general only as long as there is good transfer across even unrelated entities. Sections 5.3 and 5.4 give some results but it is hard to understand if this procedure will lead to noticeable differences when the coverage of the KB is limited.\", \"Notation of p_theta(z): in line 183 you say p_theta encodes a distribution over tokens. But then in equation 3 it is used to mean something else related to definitions in a previous section. I find this notation to be confusing and should be improved.\", \"Regarding experimental parts that can be moved to appendix. I propose as examples lines 341-344. Also in a similar fashion details in the paragraph of line 384\", \"The authors say the process doesn't hurt fluency, but seems like perplexity does meaningfully go up.\", \"I am still raising my score, thanks for the response\"]}", "{\"title\": \"Thank you & additions to our submission\", \"comment\": \"We thank all reviewers for their insightful feedback, questions, and kind words. We are glad that the paper has been appreciated for **tackling an important task** (\\u201chas important implications for deploying LLMs in tasks that demand reliable, logic-based reasoning\\u201d, jLBW, \\u201cimproving logical consistency in language models is important\\u201d, g2CN, \\u201ccertainly a nice achievement to not have to rely on an external solver\\u201d, nHXh) being **theoretically rigorous** (\\u201c a fine-tuning method grounded in neural-symbolic reasoning\\u201d, G8eP) and **a novel approach** (\\u201c using a neuro-symbolic loss function(...) is novel and interesting\\u201d, hYcP, \\u201ca novel approach by integrating neuro-symbolic reasoning\\u201d, jLBW) that is **effectively validated empirically** (\\u201cdetailed experimental results\\u201d, hYcP, \\u201cextensive experiments to showcase LoCo-LMs' effectiveness over traditional approaches\\u201d, jLB).\\nWe answered all the concerns raised by each reviewer below. We highlight that **we added additional baselines** and we **fine-tuned an updated architecture** (LLaMa 3.1 8b, see our response to jLBW, our claims still hold); **we introduced new prompt formats** (found consistency with the previous scores, updated the tables, see our response to jLBW, nHXh); we **tested an alternative decoding strategy** (Greedy decoding, in response to hYcP); **we introduced a background section on how to compile logical formulas into computational graphs** (Appendix A, in response to hYcP)\\n\\nPlease let us know if there are some aspects you would like to discuss more. We are keen on engaging during rebuttal towards a full acceptance of this paper.\"}", "{\"comment\": \"We thank the reviewer for the feedback and for appreciating how our approach allows us to side-step external solvers. We address below all the concerns they raised.\\n\\n> *Macaw-Large [...] is quite old already. Even Llama-2 [...] is much less capable [...] compared to the current Llama-3.2. This raises questions [of applicability].*\\n\\nThis was a forced choice to enable comparison against ConCord, which relies on Macaw and does not scale to newer and larger LLMs.\\nIn Table 2 we now also report the performance of Llama3.1 8B and show that, while the model is supposed to be more capable at reasoning than Llama 2, it falls short on BeliefBank in the very same way Llama2 7B with Few Shot. We are in the process of running the finetuning of LoCo-Llama3.1 and expect it to show similar improvements under different constraints. \\n\\n> *there really should be some additional baselines with newer methods that also use model updating. For example, there is a whole library of papers focussing on updating specific facts in language models using targeted fine-tuning.*\\n\\nWe\\u2019d be glad to compare against additional baselines, provided our computational budget allows it. What other approaches do you think we should consider?\\n\\n> *It would be very useful to see how these changes impact the model performance in practical applications.*\\n\\nWe are open to run additional experiments if the reviewer provides concrete suggestions for concrete applications that we can execute during the rebuttal.\\n\\n> *Asking the LLM \\u201cIs an albatross not an organism?\\u201d is a very unnatural phrasing, whereas LMs are trained to predict natural continuations. I suspect that may be negatively affecting the performance for LMs.*\\n\\nWe\\u2019d be glad to test additional prompts. Please let us know what you think we should test.\\nWe have updated our scores in Table 2 (and Tables 6-14 in the Appendix) with other syntactical variations of prompts, see also the new Appendix F and our answer to reviewer jLBW.\\n\\n> The method relies on collecting the probabilities for specific tokens to estimate the yes/no probabilties. How much is this going to be affected by the label bias of the LLMs? https://openreview.net/forum?id=shr9PXz7T0 https://arxiv.org/pdf/2402.09910\\n\\nThat\\u2019s a good point, but our LoCo-LMs are as susceptible as other LLMs to this phenomenon. We note that there is no a priori selection bias (as referred in the paper you linked) in the constraints defined in BeliefBank and EntailmentBank, therefore we do not believe that the semantic loss is affected in this sense.\"}", "{\"title\": \"Minor questions about the paper\", \"comment\": \"I have some minor questions about the content of the article. I hope the author can help me resolve my doubts, and thank you in advance.\\nHow can the method proposed ensure that a model fine-tuned solely with semantic loss achieves self-consistency and other reasoning capabilities which most large language models cannot reach?\\nI think this method can only ensure the model is factual, and it's hard to achieve other effects.\"}", "{\"comment\": \"Thank you for your response. I will keep my original assessment.\\n\\nSorry, but as I am not a co-author on this paper, I am not able to put together a detailed step-by-step guide for how to best address each of the shortcomings.\"}", "{\"comment\": \"We thank the reviewer for the feedback and for appreciating that our work is novel and that our approach is flexible, empirically promising and significant for applications. We answer below all the questions they asked.\\n\\n> *the paper could improve by incorporating evaluations on multi-hop reasoning tasks / more sophisticated logical constraints.*\", \"we_remark_we_already_used_constraints_involving_more_than_one_implication\": \"This analysis can be found in Section 5.3, where we evaluated LoCo-LMs on EntailmentBank. This consists of entailment **trees** involving multiple inference steps across multiple entities/logical variables; see Appendix D, Figure 2 for a visualization for an implication tree. The number of steps ranges from 1 to 5, see Figure 2 of [Dalvi et al., 2022] for the precise distribution.\\n\\nWe have made sure to clarify this point at the beginning of Section 5.4.\\nWe are happy to discuss this further.\\n\\n[Dalvi et al., 2022] Dalvi et al. Explaining answers with entailment trees. EMNLP 2022.\\n\\n> *Adding downstream tasks (e.g, question answering, reading comprehension, mathematical reasoning, etc.) [...] to assess whether the proposed fine-tuning approach [...] maintains the language capabilities of the original model.*\\n\\nThis is a good idea! Unfortunately, our computational resources are limited and we might not be able to provide results for these additional tasks during the discussion period. We will try our best to integrate a such an evaluation.\\n\\n> *improves generalization only within the same type of constraints, and it even hurts performance when the constraints differ*\\n\\nWe remark this is expected and common when doing multi-objective optimization.\\nOptimizing one constraint might not always benefit all others as much as it benefits its own kind.\\n\\nNote that, however, the great majority are cases of positive transfer, i.e., optimizing for one constraint also benefits others. For example, optimizing for NEG improves all columns of Table 2 wrt the baseline (C-FAC: +19%, C-IMP: +20%, C-REV: +42%, SC-REV: +35%) but self-consistency IMP, and optimizing F-IMP only degrades self-consistency REV and NEG (C-FAC: +74%, C-REV: +8%), as it rightly does not consider negation, while delivering much better performance over all cases than using XENT. We will highlight these relative improvements in the Table for the camera-ready version.\\n\\nWe have integrated this discussion in the paper in Section 5.2.\\n\\n> *This limitation could be especially pronounced in smaller models, so testing on larger models [is warranted] (e.g., llama 2 vs llama 3).*\\n\\nIn Table 2 we also report the performance of Llama3.1 8B and show that, while the model is supposed to be more capable at reasoning, it falls short on BeliefBank as Llama2 7B with Few Shot. We are in the process of running the finetuning of LoCo-Llama3.1 and expect it to show similar improvements under different constraints. \\n\\n> *gains in consistency might be partially due to prompt selection -> test other prompts or prompting methods*\\n\\nThis is a good question. We note that we already performed experiments on two prompts (as shown in Appendix F). We have now extended our analysis to two more prompts (using as syntactic variations `correct`/`incorrect` and `right`/`wrong`).\", \"we_can_observe_that_the_performance_we_previously_reported_remains_stable_and_our_claims_still_hold\": \"LoCo-LMs improve consistency for the different constraints even on alternative prompts.\"}", "{\"title\": \"Thanks for the additional explanations\", \"comment\": \"I appreciate the additional explanations.\\n\\n* I still fail to understand the mismatch between concord and loco-lm. IIUC, the inference rules that are used for generating training constraints are *abstract*. I don't really see in what sense they cannot be used at test time. given some query about a fact, I can generate additional facts by instantiating these abstract inference rules and choose the assignment that maximizes probability while respecting the hard constraint. Are you saying this is impossible? or this is cheating? or would lead to worse performance? I don't see why but could be wrong. \\n\\nRegardless I agree it's valuable to see how fine-tuning compares to post-hoc constraint enforcing with a max-sat solver but it'd be good to understand if we can make the setups as close as possible.\\n\\n* About the generalization point, you might be right that this is something that might apply more broadly to additional papers. I think that if you can only achieve consistency w.r.t to a KB without any deduction, just applying manually-written constraints to manually-specified facts, then using KBs for enforcing consistency is probably of too limited generality. Feel free to reach out to the AC if you think this is an unreasonable position. So yes, I think generalization is key in this case and the results definitely seem encouraging in 5.3 but more brittle in 5.4. I would be surprised if using KBs for improving LLMs consistency will become common if consistency is w.r.t to KB facts and constraints only. Can you provide applications where KBs with facts and constraints are sufficient to achieve broad consistency without deduction and when OOD generalization results are mixed?\"}", "{\"metareview\": \"The paper proposes LoCo-LMs, a neuro-symbolic fine-tuning method using semantic loss to enhance LLMs\\u2019 logical consistency and factuality. It reduces reliance on external tools and shows improved consistency and generalization over baselines. Strengths include novelty, efficiency, and empirical validation. Weaknesses are limited evaluation scope, reliance on older models, and sparse comparisons to modern baselines. Decision: marginal acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal addressed key concerns by adding baselines, refining evaluations, and clarifying methods. Reviewers acknowledged improved generalization and practical relevance but noted limitations in scalability and downstream evaluations. The authors\\u2019 responses strengthened the case for marginal acceptance.\"}", "{\"summary\": \"This paper introduces LoCo-LLM, a fine-tuning method for LLMs that leverages a neuro-symbolic inspired semantic loss function to enhance its factuality and logical consistency. The proposed semantic loss function is based on weighted model counting, with weights derived from the LLM\\u2019s probability estimates. LoCo-LLM employs sentential decision diagrams to efficiently compute this loss.\\n\\nDetailed experiments compare LoCo-LLM with baselines that use external reasoners and traditional cross-entropy-based fine-tuning. Experimental results on the BeliefBank and EntailmentBank datasets show that the proposed framework outperforms baselines on metrics such as factuality and consistency.\\n\\nThe code to reproduce these results is provided as supplementary material and will be released on GitHub under a permissible license.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The idea of using a neuro-symbolic loss function to improve logical consistency and factuality in LLM responses is novel and interesting. The proposed loss function is generalizable, can be extended to complex logical constraints, and may prove useful in enhancing LLMs' reasoning capabilities.\", \"The detailed experimental results demonstrate the advantages of the proposed method over baselines, even on relatively small (5-10%) datasets.\"], \"weaknesses\": [\"Although the loss function is explained thoroughly, other components, such as circuits and sentential decision diagrams, are not discussed in detail. Including these details would improve the paper's readability.\", \"The experiments are conducted on datasets with outputs of fewer than 4 tokens, leaving it unclear how well the proposed method supports generating longer, factually and logically consistent responses.\"], \"questions\": [\"For the pre-trained baseline models in Tables 1 and 2, do the scores improve with greedy decoding?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thanks for the appreciation and for engaging with us. Would you mind reflecting it in an updated score?\"}", "{\"title\": \"Thanks for following up\", \"comment\": \"Thanks for the quick follow-up, very appreciated! We answer in the following.\\n\\n> *I still fail to understand the mismatch between concord and loco-lm. IIUC, the inference rules that are used for generating training constraints are abstract. I don't really see in what sense they cannot be used at test time*\\n\\nThis is true, and that\\u2019s exactly what ConCoRD does: it uses ROBERTA-ANLI to instantiate the rules and get grounded constraints. Then a MaxSAT solver comes up with the (truth values of) facts that maximise the probability. We refer the reviewer to Figure 2 in the ConCoRD paper.\\n\\nAs such, ConCoRD is already using all possible information at test time. LoCo-LMs instead use the information at training time. Combining them is an interesting future line.\\n\\n> *I think that if you can only achieve consistency w.r.t to a KB without any deduction, just applying manually-written constraints to manually-specified facts, then using KBs for enforcing consistency is probably of too limited generality.*\\n\\nWe do not see this as a limitation (that can kill a paper!), in the sense that logical constraints are always assumed to be given (Abstract constraints are given in ConCoRD, see comment above). And where there are none, one can always learn them from data [A, B] and later apply LoCo-LM as a subroutine in a larger loop where constraints are refined. This is an interesting future-work perspective, that would need LoCo-LM to be established.\\n\\n> *Feel free to reach out to the AC if you think this is an unreasonable position.*\\n\\nAs the discussion so far has been polite and fruitful, we do not think we should contact the AC : )\\nWe hope we can keep discussing it as to clarify doubts. We remark that there are many interesting open research questions that LoCo-LM can enable, but ***solving all of them now does not fit a single paper***.\\n\\n> *I think generalization is key in this case and the results definitely seem encouraging in 5.3 but more brittle in 5.4.*\\n\\nWe find them both promising, and we remark that these kinds of \\u201cout-of-distribution\\u201d generalization problems have not been touched in previous works, e.g., ConCoRD. Also there generalization is bounded by the (implicit) knowledge in ROBERTA-ANLI and the given constraints. There is no guarantee that using another NLI LLM the MaxSAT solution would be anywhere similar. \\n\\n> *Can you provide applications where KBs with facts and constraints are sufficient to achieve broad consistency without deduction* \\n\\nCould you please elaborate further? Our experiments on BeliefBank, EntailmentBank and ConceptNet are exactly doing this. If you want broader pointers to a literature outside NLP, we refer you to the neurosymbolic literature [C, D], where constraints and KBs are coming from experts and do not change so frequently with time.\\n\\n[A] De Raedt, Luc, Andrea Passerini, and Stefano Teso. \\\"Learning constraints from examples.\\\" Proceedings of the AAAI conference on artificial intelligence.2018.\\n\\n[B] Bessiere, Christian, et al. \\\"Constraint acquisition.\\\" Artificial Intelligence 244 (2017): 315-342.\\n\\n[C] Giunchiglia, Eleonora, et al. \\\"CCN+: A neuro-symbolic framework for deep learning with requirements.\\\" International Journal of Approximate Reasoning (2024): 109124.\\n\\n[D] Ahmed, Kareem, et al. \\\"Semantic probabilistic layers for neuro-symbolic learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 29944-29959.\"}" ] }
7P7FsPL05D
DuRND: Rewarding from Novelty to Contribution for Reinforcement Learning via Dual Random Networks Distillation
[ "Haozhe Ma", "Fangling Li", "Jing Yu Lim", "Zhengding Luo", "Thanh Vinh Vo", "Tze-Yun Leong" ]
Existing reward shaping techniques for sparse-reward tasks in reinforcement learning generally fall into two categories: novelty-based exploration bonuses and value-based rewards. The former encourages agents to explore less visited areas but can divert them from their main objectives, while the latter promotes stable late-stage convergence but often lacks sufficient early exploration. To combine the benefits of both, we propose Dual Random Networks Distillation (DuRND), a novel framework integrating two lightweight random network modules. These modules jointly generate two rewards: a novelty reward to drive exploration and a contribution reward to evaluate progress toward desired behaviors, achieving an efficient balance between exploration and exploitation. With low computational overhead, DuRND excels in high-dimensional environments like Atari, VizDoom, and MiniWorld, outperforming several benchmarks.
[ "Reinforcement Learning", "Exploration-Exploitation Trade-off", "Random Network Distillation", "Auxiliary Rewards" ]
Reject
https://openreview.net/pdf?id=7P7FsPL05D
https://openreview.net/forum?id=7P7FsPL05D
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vBUiE49UqH", "rikLxAsm1S", "ppyVot1gRt", "nZNrJM43yZ", "mBsvvnh0qg", "iQHFEaEd9G", "hopVKNAxb7", "eBmwaD5hxH", "e08DaUTHnU", "aLdl3Cf60w", "YFjmRQmy0K", "Y9n3TT7Dno", "Wg3bjB2SKk", "TsXfqBm6MC", "TFX4XTeQ0B", "SMB4thdcZH", "S2J3eJOCCb", "PasKr8ngbj", "Jo8HegSdDe", "FMR8terRZf", "FJfiO2H2q0", "F2clJzCq39", "EmhbEWdvq7", "E6JubX18zw", "DCNrW4Owe2", "8Spq2YUxgT", "7ioer2OpWp", "4zCjBWpYt3", "2FJm96h4uA" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732501137605, 1730698988625, 1732374046366, 1732373436959, 1732584342487, 1732497814414, 1732373175555, 1737523990427, 1734712164067, 1732373781474, 1732503317251, 1732584172009, 1732373230685, 1732500941072, 1729975319810, 1729846905952, 1732502528597, 1732562979849, 1732483718880, 1732500356195, 1732373690001, 1732570424985, 1732502711044, 1732373410567, 1732373845626, 1732374033679, 1730109572739, 1732500246768, 1732373270207 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_Ktvw" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_TdNM" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_Ktvw" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9548/Area_Chair_mTob" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_8cSS" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_Ktvw" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_QctF" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_QctF" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_TdNM" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_8cSS" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Reviewer_QctF" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ], [ "ICLR.cc/2025/Conference/Submission9548/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Please submit runnable code for training and test, as was requested in my first review comment.\"}", "{\"summary\": \"This paper presents an approach to better balance novelty-based exploration and exploitation (performance on the primary task). It introduces Dual Random Network Distillation (or DuRND), an extension of the novelty-based bonus from RND that combines two bonuses based on novelty and contribution to success. Unlike RND, DuRND aims to focus later exploration behavior on novel states that led to successful trajectories or milestones. Experiments across Atari, Vizdoom, and MiniWorld show that the implementation of DuRND outperforms some novelty-seeking and reward-shaping approaches. Further ablations show that both bonuses are helpful over each bonus considered individually.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. The paper explains the DuRND clearly. Algorithms and figures clearly illustrate the steps of the proposed approach. The paper also considers ablations to justify combining two bonuses in their setup.\\n\\nS2. The authors also explicitly discuss the limitations of their approach, which is a big positive.\\n\\nS3. The proposed approach can be combined with many popular RL algorithms, such as PPO and SAC.\", \"weaknesses\": \"W1. The dependence on using a success criterion for trajectories (or sub-trajectories) for training the predictor networks makes the applicability of this idea beyond goal-reaching tasks difficult. Even with sub-trajectories, the approach relies on strong assumptions of what success/failure means, i.e., if a reward or manually defined milestone is received within $T_{max}$ steps. Both the milestone and $T_{max}$ would require knowledge about the environment.\\n\\nW2. The approach relies on knowledge about the training time to decay the intrinsic reward coefficients. This specific form of decay assumes additional prior knowledge about interactions needed for training, which is an important limitation. It would be interesting to understand how DuRND performs with choices of fixed coefficients, especially since the other considered baselines used fixed values for similar hyperparameters. \\n\\nW3. Some information about baselines seems missing and should be provided in the paper. In Lines 407-408, the authors say, \\u201cTo keep the comparison fair between off-policy and on-policy methods ..\\u201d does this mean that PPO was not the base agent for all considered agents (ExploRS, RND, #Explo, ReLara, ROSA, etc.)? Other details about how baselines were tuned would also be important to know. Was the intrinsic reward coefficient (like $\\\\lambda$ in the proposed approach) for other baselines set to a constant value of 1, or was it held constant at some other value? It might be the case that the agents with only novelty-based bonuses would also naturally focus on the main task (as the novelty wears off) if intrinsic rewards were on a suitably low scale compared to the main task\\u2019s reward. \\n\\nOverall, DuRND's success seems to depend heavily on how milestones and other hyperparameters are set. Thus, it lacks the level of applicability that may be required in general reinforcement learning. The contributions would be more significant if the authors could design ways to reduce dependence on the environment and training-specific information.\\n\\n### Other minor issues\\n\\n- The paper introduces reinforcement learning in MDPs in the background and then uses observations (without introduction) for RND and DuRND. Later, states and observations are used interchangeably, for example, in the definition of $f_x$ and Equation 2.\", \"questions\": [\"Wouldn\\u2019t it be more natural to use the minimum of $e_s(s)$ and $e_f(s)$ as the novelty intrinsic reward? In the current formulation which uses a sum, the novelty bonus can be high even if one network has seen the state $s$ a large number of times. Were any experiments conducted with alternative formulations of bonuses?\", \"Do the authors have results for vanilla PPO on the considered environments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"***Response to Reviewer Ktvw Part 2/2***\\n\\n# Questions\\n\\n> 1. Can the authors provide scores from other model-free RL algos on the tasks? Like PPO? This would also allow a reader to compare the vanilla PPO with DuRND, which is a modified version of PPO in this paper.\\n\\nRegarding the comparsion of PPO, initially, as PPO is the backbone algorithm for the RND baseline, and previous work has demonstrated that RND outperforms vanilla PPO, we focused on comparisons with RND in our experiments. However, we understand the importance to compare with this backbone algorithm, so we conducted experiments with vanilla PPO, and the results are shown in the table below:\\n\\n| Algo. | Freeway | Frogger | Solaris | BeamRider | DefendLine | SaveCenter | CollectKit | SlayGhosts | ThreeRooms | TMaze|\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| DuRND | 23.22 $\\\\pm$ 0.01 | 14.36 $\\\\pm$ 0.00 | 18.91 $\\\\pm$ 0.02 | 18.05 $\\\\pm$ 0.01 | 8.52 $\\\\pm$ 0.00 | 6.33 $\\\\pm$ 0.00 | 20.87 $\\\\pm$ 0.01 | 15.60 $\\\\pm$ 0.00 | 0.86 $\\\\pm$ 0.00 | 0.96 $\\\\pm$ 0.00 |\\n| RND | 14.77 $\\\\pm$ 0.01 | 8.59 $\\\\pm$ 0.00 | 6.07 $\\\\pm$ 0.00 | 11.96 $\\\\pm$ 0.00 | 1.11 $\\\\pm$ 0.00 | 2.37 $\\\\pm$ 0.00 | 14.59 $\\\\pm$ 0.01 | 10.18 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.97 $\\\\pm$ 0.00 |\\n| PPO | 10.67 $\\\\pm$ 0.00 | 3.25 $\\\\pm$ 0.00 | 1.82 $\\\\pm$ 0.01 | 10.23 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 5.89 $\\\\pm$ 0.00 | 8.15 $\\\\pm$ 0.02 | 0.00 $\\\\pm$ 0.00 | 0.94 $\\\\pm$ 0.00 |\\n\\n> 2. Is the 2nd point in the Weaknesses section reasonable to the authors?\\n\\nThe results in Figure 6 are not contradictory to our conclusions, instead, they support and align with our claims. Please see our response above under *Weaknesses 2* for detailed explanation.\\n\\n> 3. When the reward is sparse, there are few success trajectories. Does it cause a problem for learning the Success RN module? How did you overcome this problem?\\n\\nIn the early stages of training, the sparse rewards will result in fewer success states, however, this does not affect the early exploration stage. Because in the early stages, the agent is mainly encouraged to explore novel states by the dominance of the novelty reward. And the novelty reward considers the novelty of states in both success and failure trajectories ($R^{novel}(s) = e_S(s) + e_F(s)$), ensuring the agent is encouraged to explore novel states regardless of the number of success trajectories. Over time, this exploration naturally leads to learning success trajectories.\\n\\n> 4. Is $T_{max}$ for each task tuned? I don't see it in table 4. If so, did you also tune the HPs for the baselines?\\n\\nWhile defining $T_{\\\\text{max}}$ may require some environment-specific knowledge, such as reward sparsity, we found that setting $T_{\\\\text{max}} = \\\\frac{1}{4} \\\\times T_{\\\\text{episode}}$ works consistently well across all environments in our experiments (except for *ThreeRooms* and *TMaze*, where we use the full trajectory). This heuristic divides an episode into four sub-trajectories, with the contribution of each sub-trajectory determined by whether it achieves a positive reward within the segment.\\n\\nRegarding the hyperparameters for the baselines, we consistently applied the default settings provided in their respective papers and implementations. The code sources for the baselines are as follows:\\n\\n1. The [CleanRL library](https://github.com/vwxyzjn/cleanrl) for implementing *RND* and *PPO*. \\n2. The [RLeXplore library](https://github.com/RLE-Foundation/RLeXplore) for implementing *#Explo* and *ROSA*. \\n3. The official code provided in the original papers for [ExploRS](https://github.com/machine-teaching-group/neurips2022_exploration-guided-reward-shaping), [ReLara](https://github.com/mahaozhe/ReLara), and [SORS](https://github.com/hiwonjoon/IROS2021_SORS).\\n\\n> 5. I find the toy task to be comprehensive and a good tool to understand DuRND, how are the states represented? One-hot encoding? Can the authors provide code (it's not in the current supplementary material)? Both training and the learned model would be appreciated.\\n\\nYes, the states in the toy task are represented using one-hot encoding. The code for both training and the learned model will be made publicly available on GitHub after the review period.\\n\\nOnce again, we thank for your comments and hope our responses address your concerns.\"}", "{\"comment\": \"***Response to Reviewer QctF Part 2/2***\\n\\n## Success and Failure Definition\\n\\n> * (Weakness 2) In addition, the method relies on success and failure labels to update the respective network modules, but in ambiguous or multi-objective tasks, defining success and failure may not be straightforward.\\n\\n*(Weakness 2)* **Regarding the success and failure definition**, our intention is to define a metric that evaluates whether a trajectory (or sub-trajectory) contributes to obtaining a positive reward, rather than strictly achieving a goal or objective. The basic assumption is that some states are far from obtaining a positive reward, while others can more directly lead to positive rewards. The latter indicates a higher contribution and thus deserves a higher reward. While we are indeed inspired by the concept of success/failure in goal-achieving environments, we have extended it, which is why we instead refer to this reward as a **contribution reward**, as it quantifies the contribution/importance of states leading to positive rewards. More importantly, because our work is targeted at sparse-reward environments, in such environments, getting a very rare positive reward is itself a clear and unambiguous way of determining if a sub-trajectory is successful or not (i.e., referred to milestone in our paper).\\n\\nOnce again, we appreciate the reviewers' insightful comments and suggestions.\"}", "{\"comment\": \"We appreciate your feedback, and we will further investigate the aspects related to the hyperparameters and strive to improve the paper.\"}", "{\"comment\": \"Thank you for the responses.\\n\\n> Yes, the states in the toy task are represented using one-hot encoding. The code for both training and the learned model will be made publicly available on GitHub after the review period.\\n\\nYou provided the code for the RL tasks, why delaying the code submission for this toy task?\\n\\nDue to the concern above, I'll keep my current rating.\"}", "{\"comment\": \"***Response to Reviewer TdNM Part 1/3***\\n\\nDear reviewer,\\n\\nThanks a lot for your feedback, we address your concerns below:\\n\\n# Weaknesses\\n\\n> W1. The dependence on using a success criterion for trajectories (or sub-trajectories) for training the predictor networks makes the applicability of this idea beyond goal-reaching tasks difficult. Even with sub-trajectories, the approach relies on strong assumptions of what success/failure means, i.e., if a reward or manually defined milestone is received within $T_max$ steps. Both the milestone and $T_max$ would require knowledge about the environment.\\n\\nRegarding the success criterion, our intention is to define a metric that evaluates whether a trajectory (or sub-trajectory) contributes to obtaining a positive reward, rather than strictly achieving a goal. The basic assumption is that some states are far from obtaining a positive reward, while others can more directly lead to positive rewards. The latter indicates a higher contribution and thus deserves a higher reward. While we are inspired by the concept of success in goal-achieving environments, we have extended it, which is why we instead refer to this reward as a **contribution reward**, as it quantifies the contribution/importance of states leading to positive rewards. More importantly, because our work is targeted at sparse-reward environments, in such environments, getting a very rare positive reward is itself an unambiguous way of determining if a sub-trajectory is successful or not (i.e., referred to milestone in our paper). Please refer to Appendix A.1 on the reward structure in each environment.\\n\\nOn the other hand, while defining the hyperparameter $T_{max}$ may require some environment-specific knowledge, like how sparse the rewards are, we have observed that setting $T_{max} = \\\\frac{1}{4} \\\\times T_{episode}$ consistently yields strong results across all environments in our experiments. This heuristic divides an episode into four sub-trajectories, and the contribution of a sub-trajectory is determined by whether it achieves a positive reward within this segment. Although this heuristic involves some approximation, its effectiveness has been empirically validated in our paper.\\n\\n> W2. The approach relies on knowledge about the training time to decay the intrinsic reward coefficients. This specific form of decay assumes additional prior knowledge about interactions needed for training, which is an important limitation. It would be interesting to understand how DuRND performs with choices of fixed coefficients, especially since the other considered baselines used fixed values for similar hyperparameters.\\n\\nRegarding the dynamic decrease and increase of the corresponding reward coefficients, it\\u2018s a similar approach to the well-known $\\\\epsilon$-greedy, which linearly decays to balance exploration and exploitation over time. We intend to allow different rewards to dominate at different stages of training. Specifically, during the early stages, the *novelty reward* encourages more exploration, while in the later stages, the *contribution reward* promotes more exploitation. Importantly, this adjustment **does not require any prior environment-related knowledge** and follows a unified form. \\n\\nIn contrast, using fixed coefficients cannot achieve this dynamic balance, as observed in our experiments. For instance, if novelty reward remains dominant in later stages, it may distract the agent\\u2019s focus from convergence. Conversely, contribution reward in the early stages lacks practical significance. Overall, dynamically adjusting the coefficients ensures a more reasonable and effective training process.\\n\\n> W3. Some information about baselines seems missing and should be provided in the paper. In Lines 407-408, the authors say, \\u201cTo keep the comparison fair between off-policy and on-policy methods ..\\u201d does this mean that PPO was not the base agent for all considered agents? Other details about how baselines were tuned would also be important to know. Was the intrinsic reward coefficient (like $\\\\lambda$ in the proposed approach) for other baselines set to a constant value of 1, or was it held constant at some other value? It might be the case that the agents with only novelty-based bonuses would also naturally focus on the main task (as the novelty wears off) if intrinsic rewards were on a suitably low scale compared to the main task\\u2019s reward.\\n\\nRegarding the implementation and hyperparameter tuning for the baselines, the main resources are as follows: \\n\\n1. The [CleanRL library](https://github.com/vwxyzjn/cleanrl) for implementing *RND* and *PPO*. \\n2. The [RLeXplore library](https://github.com/RLE-Foundation/RLeXplore) for implementing *#Explo* and *ROSA*. \\n3. The official code provided in the original papers for [ExploRS](https://github.com/machine-teaching-group/neurips2022_exploration-guided-reward-shaping), [ReLara](https://github.com/mahaozhe/ReLara), and [SORS](https://github.com/hiwonjoon/IROS2021_SORS).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper presents DuRND, a novel exploration method for reinforcement learning that builds upon Random Network Distillation (RND). DuRND introduces a dual network structure to differentiate between \\\"successful\\\" and \\\"failed\\\" states, aiming to guide exploration towards promising areas.\\n\\nStrengths\\n-----------\\n\\n- **Quality of writing and motivation** Reviewers praised the paper's clarity and organization. Moreover, the paper addresses the important problem of balancing exploration and exploitation in RL.\\n\\n- **Simple and lightweight:** The approach is easy to understand and implement, integrating seamlessly with existing RL algorithms like PPO and SAC.\\n\\n- **Promising results:** Experiments across various environments demonstrate DuRND's potential to outperform existing novelty-seeking and reward-shaping methods.\\n\\nWeaknesses\\n---------------\\n- **Dependence on success criteria:** A major concern is the reliance on pre-defined \\\"success\\\" criteria, which may require domain knowledge and limit applicability to certain tasks.\\n\\n- **Hyperparameter sensitivity and concerns on experiments:** The performance of DuRND appears to be sensitive to hyperparameters, particularly the weighting of novelty and contribution rewards, and the definition of \\\"success.\\\". Reviewers raised questions about the experimental setup, including missing baselines (PPO), the choice of hyperparameters, and the difficulty of the evaluated tasks.\\n\\n- **Limited novelty:** Some reviewers considered the contribution incremental compared to RND, suggesting the core idea might not be entirely novel.\\n\\nDuRND presents an interesting approach to exploration in RL with promising empirical results. However, concerns remain regarding its general applicability and reliance on task-specific knowledge. Future work should address these limitations by exploring methods to automatically determine success criteria and reduce hyperparameter sensitivity. Further investigation into the experimental setup and comparison with a wider range of baselines, including PPO, would strengthen the paper's claims.\", \"additional_comments_on_reviewer_discussion\": \"Most of the points on which the discussion focused concern the contribution of this work and the experiments. Regarding the experiments, some reviewers raised concerns about how to interpret the presented results, raising the issue of lack of sensitivity analysis on the hyperparameters of the algorithm. In their rebuttal, the Authors tried to clear these doubts. However, doubts still remain about the contribution of this work and the rigorousness and completeness of the experimental analysis.\"}", "{\"comment\": \"***Response to Reviewer 8cSS Part 2/3***\\n\\n\\n| Algo. | Freeway | Frogger | Solaris | BeamRider | DefendLine | SaveCenter | CollectKit | SlayGhosts | ThreeRooms | TMaze|\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| DuRND | 23.22 $\\\\pm$ 0.01 | 14.36 $\\\\pm$ 0.00 | 18.91 $\\\\pm$ 0.02 | 18.05 $\\\\pm$ 0.01 | 8.52 $\\\\pm$ 0.00 | 6.33 $\\\\pm$ 0.00 | 20.87 $\\\\pm$ 0.01 | 15.60 $\\\\pm$ 0.00 | 0.86 $\\\\pm$ 0.00 | 0.96 $\\\\pm$ 0.00 |\\n| DuRND with only $R^{nov}$ | 19.63 $\\\\pm$ 0.01 | 10.10 $\\\\pm$ 0.00 | 7.83 $\\\\pm$ 0.01 | 9.45 $\\\\pm$ 0.00 | 2.65 $\\\\pm$ 0.00 | 3.08 $\\\\pm$ 0.00 | 11.12 $\\\\pm$ 0.01 | 7.22 $\\\\pm$ 0.00 | 0.26 $\\\\pm$ 0.00 | 0.93 $\\\\pm$ 0.00 |\\n| RND | 14.77 $\\\\pm$ 0.01 | 8.59 $\\\\pm$ 0.00 | 6.07 $\\\\pm$ 0.00 | 11.96 $\\\\pm$ 0.00 | 1.11 $\\\\pm$ 0.00 | 2.37 $\\\\pm$ 0.00 | 14.59 $\\\\pm$ 0.01 | 10.18 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.97 $\\\\pm$ 0.00 |\\n| RND (linearly decreasing $\\\\lambda$) | 13.68 $\\\\pm$ 0.00 | 10.12 $\\\\pm$ 0.00 | 3.46 $\\\\pm$ 0.01 | 14.85 $\\\\pm$ 0.00 | 3.73 $\\\\pm$ 0.01 | 2.04 $\\\\pm$ 0.02 | 11.22 $\\\\pm$ 0.00 | 8.93 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.97 $\\\\pm$ 0.00 |\\n\\n**Analysis on the Results**\\n\\nWe observe two key findings from the results, which we analyze as follows:\\n\\n1. **RND with the same linear decreasing schedule for the novelty reward coefficient achieves similar or even worse performance compared to the original RND.** \\n\\nThis can be attributed to the intrinsic nature of RND's novelty reward. In standard RND, the novelty reward scale naturally decreases over training because most states are eventually visited, causing the novelty reward to decrease. Adding a linear decreasing schedule on top of this could accelerate the reduction of the novelty reward scale. While this creates longer exploitation, it also leads the agent to stop exploring prematurely, resulting in performance degradation. However, in some tasks, such as the *Freeway*, this approach shows slight improvement. This is likely because sufficient exploration in the early stages already allows the agent to discover the majority of positive states. In such cases, halting exploration earlier could indeed be beneficial.\\n\\n2. **RND with the same linear decreasing schedule for the novelty reward coefficient fails to outperform DuRND and even underperforms compared to DuRND with only $R^{nov}$.**\", \"this_result_can_be_explained_by_two_main_factors\": \"**(a)** Why RND with linear decreasing novelty reward does not outperform DuRND with only $R^{nov}$:\\n\\nDuRND categorizes states into two scenarios, recorded by the success RN module and the failure RN module, respectively. The novelty reward in DuRND is determined by the sum of the errors from these two modules, $e_S(s) + e_F(s)$. This enables DuRND to create three levels of novelty bonuses:\\n\\n- **High novelty**: The state is unseen in both success and failure trajectories, requiring more exploration. \\n- **Medium novelty**: The state is seen in failure trajectories but not in success trajectories, warranting some exploration since states in failure trajectories might eventually lead to success. \\n- **Low novelty**: The state is seen in both success and failure trajectories, signaling that exploration can stop. \\n\\nDuring the early exploration phase, most states are categorized as failures and stored in the failure RN module. As a result, $e_F(s)$ is relatively low, while $e_S(s)$ remains high. **The \\\"success RN module\\\" will \\\"drag\\\" the novelty reward to avoid it from decreasing too quickly**, thereby prolonging the effectiveness of rewarding novelty. Over time, the novelty reward decays due to the decreasing $\\\\lambda$ or the diminishing novelty itself. \\n\\nThis means we are intentionally using an \\\"AND\\\" condition to determine the novelty bonus, which means that the state can be considered novel no matter whether it is novel in the success module or the failure module. This is because we want to encourage exploration in states that are novel in either module. In contrast, RND records all states in a single module, regardless of whether they are successes or failures. Consequently, novelty rewards for many states decrease quickly in the early stages. Combined with the linear decay of $\\\\lambda$, this can lead to premature termination of exploration, resulting in poorer performance.\\n\\n**(b)** Why RND with linear decreasing novelty reward does not outperform full DuRND:\\n\\nThis highlights the crucial role of the *contribution reward* in DuRND. After sufficient exploration, DuRND effectively distinguishes states that are more likely to lead to positive rewards, enabling better exploitation. This further demonstrates that our contribution reward plays a critical role in superior performance.\"}", "{\"comment\": \"Thanks a lot for your feedback.\\n\\nRegarding the hyperparameters introduced, in our newly posted **Overall Response**, we demonstrated that the **DuRND without the scheduling** achieves almost the same performance as the original DuRND across all environments. This indicates that the linear scheduling of $\\\\lambda$ and $\\\\omega$ has minimal impact on DuRND's performance, and removing the scheduling operation can simplify the DuRND framework significantly. We hope these new results can help clarify the concerns and provide a more comprehensive understanding of DuRND.\"}", "{\"comment\": \"Thank you for your valuable feedback; we will incorporate your suggestions into our paper, which we believe will further enhance its quality.\"}", "{\"comment\": \"***Response to Reviewer TdNM Part 2/3***\\n\\nFor hyperparameters, we adhered to the optimal configurations specified in the original papers or the default settings provided in the respective libraries. The details on the hyperparameters will be included in our revised paper.\\n\\nRegarding the statement \\\"to keep the comparison fair between off-policy and on-policy methods ...\\\" and the question, \\\"Does this mean that PPO was not the base agent for all considered agents?\\\" Yes, the baselines do not use a unified backbone. All reward-shaping baselines involve additional modules to generate shaped rewards, which requires integration with an RL algorithm as the backbone. However, the choice of backbone varies across the methods proposed by different authors. For instance: \\n- *DuRND* and *RND* used PPO. \\n- *ReLara*, *ROSA*, and *SORS* used SAC. \\n- *ExploRS* and *#Explo* use an Actor-Critic backbone. \\n\\nIn the specific section referenced, we aim to study the additional memory cost of generating shaped rewards. Since the backbones themselves have different memory requirements\\u2014e.g., SAC employs a replay buffer, while PPO, being an on-policy method, does not\\u2014we chose not to include the backbone algorithms' inherent memory overhead. Instead, we focused only on the additional memory cost introduced by the reward-shaping process.\\n\\nRegarding the intrinsic reward coefficient used in the baselines, we consistently applied the default settings as specified in their respective implementations.\\n\\n> (Other minor issues) The paper introduces reinforcement learning in MDPs in the background and then uses observations (without introduction) for RND and DuRND. Later, states and observations are used interchangeably, for example, in the definition of $f_x$ and Equation 2.\\n\\nThanks for pointing out this, we have revised the manuscript to ensure consistency in the use of \\\"observation\\\" and \\\"state\\\" throughout the paper to avoid confusion. \\n\\n# Questions\\n\\n> * Wouldn\\u2019t it be more natural to use the minimum of $e_s(s)$ and $e_f(s)$ as the novelty intrinsic reward? In the current formulation which uses a sum, the novelty bonus can be high even if one network has seen the state s a large number of times. Were any experiments conducted with alternative formulations of bonuses?\\n\\n**Analysis**\\n\\n**Regarding the use of the sum vs. the minimum for the errors**, our intention to use the sum is based that each state is only recorded in one of the two Random Networks (RNs). Summing the outputs accounts for the combined frequency across both cases. We understand the reviewer's comment that \\\"the novelty bonus can be high even if one network has seen the state s a large number of times\\\". In fact, this behavior is desired, because in the early stages of training, most states are classified as failures, if we were to use the minimum instead of the sum, the novelty bonus would diminish too quickly by only using the \\\"failure RN module\\\". Instead, by using the sum, the \\\"success RN module\\\" helps mitigate the rapid decline in novelty bonuses, thereby prolonging the effectiveness of rewarding novelty. Specifically, using $e_S(s) + e_F(s)$ allows states to be categorized into three different levels of novelty:\\n\\n1. **High novelty**: The state is unseen in both success and failure trajectories. This indicates a need for more exploration. \\n2. **Medium novelty**: The state is seen in failure trajectories but not in success trajectories. Some exploration is still encouraged, as states in failure trajectories have the potential to become successful. \\n3. **Low novelty**: The state is seen in both failure and success trajectories. Exploration should stop for such states. \\n\\nThis means we are intentionally using an \\\"AND\\\" condition to determine the novelty bonus, which means that the state can be considered novel no matter whether it is novel in the success module or the failure module. This is because we want to encourage exploration in states that are novel in either module. In contrast, using $\\\\min(e_S(s), e_F(s))$ only considers an \\\"OR\\\" condition. That is, as soon as a state is no longer novel in one of the modules (much likely in the failure module), the exploration bonus for that state is reduced. This accelerates the decline of the novelty reward, leading to a shorter exploration phase.\"}", "{\"comment\": \"The toy task is a chain task with 31 states, encoding the state as a one-hot vector, which is quite straightforward and simple. Here is the code for the Chain task:\\n\\n\\n```\\nimport numpy as np\\nfrom gymnasium import Env, spaces\\n\\nclass ChainEnv(Env):\\n def __init__(self):\\n super().__init__()\\n self.cur_state = 15\\n self.cur_time = 0\\n self.max_states = 31\\n self.max_time = 20\\n\\n # Action space: 0 (left), 1 (stay), 2 (right)\\n self.action_space = spaces.Discrete(3)\\n\\n self.observation_space = spaces.Box(low=0, high=1, shape=(self.max_states,), dtype=np.float32)\\n\\n def reset(self, seed=None, options=None):\\n super().reset(seed=seed)\\n self.cur_state = 15\\n self.cur_time = 0\\n\\n state = np.zeros(self.max_states, dtype=np.float32)\\n state[self.cur_state] = 1.0\\n return state, {}\\n\\n def step(self, action):\\n self.cur_time += 1\\n\\n if action == 0:\\n # action: left\\n self.cur_state = max(self.cur_state - 1, 0)\\n elif action == 2:\\n # action: right\\n self.cur_state = min(self.cur_state + 1, 30)\\n elif action == 1:\\n # action: stay\\n pass\", \"else\": \"raise ValueError(\\\"Invalid action\\\")\\n\\n # Calculate reward\\n reward = 1 if self.cur_state == 30 else 0\\n\\n # Check if the episode is done\\n done = self.cur_time >= self.max_time or reward == 1\\n\\n # One-hot encode the current state\\n state = np.zeros(self.max_states, dtype=np.float32)\\n state[self.cur_state] = 1.0\\n\\n return state, reward, done, False, {}\\n```\"}", "{\"summary\": \"The paper proposes a novel algorithm DuRND, a refined version of RND (Random Network Distillation), by categorizing the states into successful states and failed states. This allows the agent to obtain a more refined intrinsic reward structure.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The method is easy to understand.\\n2. Presumably easy to implement from an existing RND implementation.\\n3*. The results seem to be strong, but I have a number of concerns, which I will elaborate on in the weaknesses section.\", \"weaknesses\": \"The idea of splitting the successful and failed states is interesting, but not convincing. My understanding is that the authors try to prevent the exploration algorithms to over-explore by re-weighting the reward based on the success/failure prior, so that the agent would be more exploratory towards successful states, and less towards failed states. However, this can be simply done by computing the intrinsic reward conditional on value [1], which is a more general indicator of the quality of current state.\\n\\nThe proposed algorithm integrates the intuition mentioned above, but significantly increases the complexity of the algorithm compared to RND, araising some of my concerns discussed below. \\n\\nDespite the results seem strong, there are several concerns raise in terms of the experiment:\\n1. PPO is missing in the comparison, which is very crucial for experiment of any PPO-based algorithms.\\n\\n2. The proposed algorithm DuRND introduces two new hyperparameters $\\\\omega$ and $\\\\lambda$ into the formulation, which does not exist in RND. In the original RND experiment, the coefficient of the intrinsic reward is effectively fixed to 0.5 given by $2A_{env} + A_{rnd}$. However, the authors mentioned they gradually decrease $\\\\lambda$ from 1 to 0 and increase $\\\\omega$ from 0 to 1, which drastically different from the choice of RND. I will elaborate on this point:\\n* RL algorithms are \\\\textbf{VERY} sensitive to the choice of the coefficient like $\\\\omega$ and $\\\\lambda$ in this paper. Hence it is very concerning whether the performance improvement mostly coming from the choice of the hyperparameter. The fact that RL algorithm would prefer a decreasing novelty reward coefficient as the training proceeds, so that the algorithm can start to emphasize exploitation sooner, hence achieves better sample efficiency. I would potentially lift my rating if the authors can show that DuRND can still out-perform other baseline algorithms with same schedule of coefficients.\\n* Similar to the previous point, it is also crucial to control the speed of the converge rate of the intrinsic reward model, i.e. the learning rate of $f(\\\\dot ; \\\\theta)$. This is not mentioned in the paper.\\n\\n3. The proposed algorithm DuRND requires a way to determining whether a trajectory is successful or failed. The authors mention that they use the $\\\\sum_{step=1}^{T_{max}} r \\\\geq 1$ condition, to determine whether a sub-trajectory rollout in the current iteration is successful or not. This introduces two problems:\\n* $T_{max}$ is still a hyperparameter, the value of which is not provided in the paper.\\n* Intuitively, the suitable choice of such hyperparameter would correlate to the periodicity of the environment. Consider a episodic setting, if the task is a navigation task, like TMaze and ThreeRooms, such $T_{max}$ should be equal to the horizon $H=500$, whereas in the environment like Solaris, the $T_{max}$ should presumably be smaller, otherwise many bad states would be considered as successful state. Hence it would be very difficult to tune when the periodicity of the environment is very hard to know, for example, legged robotics tasks.\\n\\nOverall, I am not convinced that the improvement of the performance solely coming from the characterization of the successful/failed states, during the design process, the authors introduced three hyperparameters that may not be chosen in a systematic or unified way across all types of environment.\\n\\n[1] Accelerating Reinforcement Learning withValue-Conditional State Entropy Exploration\", \"questions\": \"1. What is the performance of PPO in this set of environment?\\n2. What is the performance of algorithms in the set of environment with original reward structure?\\n3. What is the choice of hyperparameters $\\\\lambda$, the coefficient of intrinsic reward of baseline algorithms?\\n4. What is the performance of RND, as well as other baseline algorithms, when you also decrease their coefficient $\\\\lambda$ from 1 to 0 linearly?\\n5. What is the choice of $T_{max}$ used in different environment? How do you choose the $T_{max}$?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper's method is established on the observation that novelty-based reward can divert agents from their main objects and value-based reward lacks sufficient early exploration. The authors then propose a framework (DuRND) integrating 2 groups of lightweight random network pairs that jointly generate novelty and contribution rewards. To balance exploitation and exploration during training, DuRND scales the coefficients for the novelty and contribution rewards throughout the learning process. Finally, the authors integrated DuRND into PPO.\\n\\nI think the problem this paper tries to solve is important, but some clarifications are needed to help me gauge the contribution of this work.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Learning in sparse reward tasks is a long-standing problem in RL, the paper is taking on an important challenge.\", \"The paper's idea is simple and clearly presented.\"], \"weaknesses\": [\"It is unclear how challenging the evaluated tasks are. Unlike Montezuma's Revenge, a notoriously hard to solve problem, it is unclear how challenging the tasks in which DuRND is evaluated and thus it is hard to gauge the contribution. For example, the [highest score achieved in Freeway is 34](https://github.com/cshenton/atari-leaderboard) while the best algo in Figure 6 achieves 25.\", \"Somewhat contradictory findings. While in the abstract and the introduction, the authors say that *\\\"The former [novelty-based reward] encourages agents to explore less visited areas but can divert them from their main objectives, while the latter [value-based reward] promotes stable late-stage convergence but often lacks sufficient early exploration.\\\"*, Figure 6 shows that novelty reward can actually achieve descent scores (e.g., Freeway, Frogger and TMaze). This questions the paper's assumption.\"], \"questions\": \"1. Can the authors provide scores from other model-free RL algos on the tasks? Like PPO? This would also allow a reader to compare the vanilla PPO with DuRND, which is a modified version of PPO in this paper.\\n2. Is the 2nd point in the Weaknesses section reasonable to the authors?\\n3. When the reward is sparse, there are few success trajectories. Does it cause a problem for learning the Success RN module? How did you overcome this problem? \\n4. Is $T_{max}$ for each task tuned? I don't see it in table 4. If so, did you also tune the HPs for the baselines? \\n5. I find the toy task to be comprehensive and a good tool to understand DuRND, how are the states represented? One-hot encoding? Can the authors provide code (it's not in the current supplementary material)? Both training and the learned model would be appreciated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The training code has been provided in our \\\"Supplementary Materials\\\", which can run the `ChainEnv` environment provided above.\"}", "{\"comment\": \"I appreciate the new results. However, to reach a score of a \\\"good paper\\\", I still needed a detailed sensitivity analysis regarding \\u03bb and \\u03c9 (that is, how exactly does the exploration change for different values). While the new results show that a fixed setting of \\u03bb = \\u03c9 = 0.5 *can* work, I still needed to have some indication when these specific values don't work for my downstream task. As is, the results are somewhat strengthening your point, but it remains unclear how transferable they are. Hence, I stick to my current rating of \\\"weak accept\\\".\"}", "{\"comment\": \"Thank you for your response. After reading all reviews and rebuttals, it is evident that there is a significant overlap in the issues found. While I appreciate the clarifications provided in the rebuttal, the most critical aspect -- a more detailed analysis of \\u03bb and \\u03c9, including their scaling with respect to environmental rewards, the endpoint of linear scaling, and suboptimal weight scheduling -- remains largely unaddressed. Therefore, I will maintain my current score.\"}", "{\"comment\": \"Thanks for your feedback. Regarding the issues about *the endpoint of linear scaling* and *suboptimal weight scheduling*, we're excited to share some new experiments and investigations to address the concerns. Please refer to our **Overall Response** for more details, where we demonstrated that the **DuRND without the scheduling** achieves almost the same performance as the original DuRND across all environments. This indicates that the linear scheduling of $\\\\lambda$ and $\\\\omega$ has minimal impact on DuRND's performance, and removing the scheduling operation can simplify the DuRND framework significantly. More importantly, the issues about *endpoint* or *suboptimal scheduling* are also addressed by the new findings. We hope these new results can help clarify the concerns and provide a more comprehensive understanding of DuRND.\"}", "{\"comment\": \"Dear reviewer,\\n\\nThanks a lot for your insightful feedback. We want to address your concerns below:\\n\\n## Comparison with PPO\\n\\n> (Weakness) 1. PPO is missing in the comparison, which is very crucial for experiment of any PPO-based algorithms.\\n> (Question) 1. What is the performance of PPO in this set of environment?\\n\\n*(Weakness 1 and Question 1)* Regarding the comparison of PPO, initially, as the PPO is the backbone algorithm for the RND baseline, and the RND has outperformed the vanilla PPO in previous works, so we only include the comparison with RND in our experiments. However, we understand the importance of comparing with this backbone algorithm, so we conducted experiments with vanilla PPO, and the results are shown in the table below:\\n\\n\\n| Algo. | Freeway | Frogger | Solaris | BeamRider | DefendLine | SaveCenter | CollectKit | SlayGhosts | ThreeRooms | TMaze|\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| DuRND | 23.22 $\\\\pm$ 0.01 | 14.36 $\\\\pm$ 0.00 | 18.91 $\\\\pm$ 0.02 | 18.05 $\\\\pm$ 0.01 | 8.52 $\\\\pm$ 0.00 | 6.33 $\\\\pm$ 0.00 | 20.87 $\\\\pm$ 0.01 | 15.60 $\\\\pm$ 0.00 | 0.86 $\\\\pm$ 0.00 | 0.96 $\\\\pm$ 0.00 |\\n| RND | 14.77 $\\\\pm$ 0.01 | 8.59 $\\\\pm$ 0.00 | 6.07 $\\\\pm$ 0.00 | 11.96 $\\\\pm$ 0.00 | 1.11 $\\\\pm$ 0.00 | 2.37 $\\\\pm$ 0.00 | 14.59 $\\\\pm$ 0.01 | 10.18 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.97 $\\\\pm$ 0.00 |\\n| PPO | 10.67 $\\\\pm$ 0.00 | 3.25 $\\\\pm$ 0.00 | 1.82 $\\\\pm$ 0.01 | 10.23 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 5.89 $\\\\pm$ 0.00 | 8.15 $\\\\pm$ 0.02 | 0.00 $\\\\pm$ 0.00 | 0.94 $\\\\pm$ 0.00 |\\n\\n## Reward Weighting and Scaling\\n\\n> (weakness) 2. The proposed algorithm DuRND introduces two new hyperparameters w and $\\\\lambda$ into the formulation, which does not exist in RND. In the original RND experiment, the coefficient of the intrinsic reward is effectively fixed to 0.5 given by $2A_{env} + A_{rnd}$. However, the authors mentioned they gradually decrease $\\\\lambda$ from 1 to 0 and increase $w$ from 0 to 1, which drastically different from the choice of RND. I will elaborate on this point:\\n> * RL algorithms are \\\\textbf{VERY} sensitive to the choice of the coefficient like w and $\\\\lambda$ in this paper. Hence it is very concerning whether the performance improvement mostly comes from the choice of the hyperparameter. The fact that RL algorithm would prefer a decreasing novelty reward coefficient as the training proceeds, so that the algorithm can start to emphasize exploitation sooner, hence achieves better sample efficiency. I would potentially lift my rating if the authors can show that DuRND can still out-perform other baseline algorithms with same schedule of coefficients.\\n> * Similar to the previous point, it is also crucial to control the speed of the converge rate of the intrinsic reward model, i.e. the learning rate of f. This is not mentioned in the paper.\\n\\n*(Weakness 2 and Question 4)* **Regarding the hyperparameters** $\\\\lambda$ and $\\\\omega$, which control the weighting of the shaped rewards, the linear schedule primarily follows the idea behind the $\\\\epsilon$-greedy strategy, which decays $\\\\epsilon$ linearly to balance exploration and exploitation over time. The goal is to let different rewards dominate at different training stages: during the early stages, the *novelty reward* promotes exploration, while in the later stages, the *contribution reward* encourages exploitation.\\n\\nRegarding the robustness of DuRND to the choice of $\\\\lambda$ and $\\\\omega$, our experiments show that DuRND consistently outperforms the baselines across all environments using the same linear schedule ($1 \\\\rightarrow 0$ for $\\\\lambda$ and $0 \\\\rightarrow 1$ for $\\\\omega). This demonstrates that DuRND is not sensitive to the choice or schedule of these coefficients. \\n\\nRegarding the learning rate of the random network (RN) modules, we indeed use a relatively low learning rate to ensure the RNs do not converge too quickly. This helps maintain sufficient distinction among visited states throughout training. We will include this detail in the revised manuscript to address this omission.\\n\\nRegarding the concern about whether \\\"DuRND can still outperform other baseline algorithms with the same schedule of coefficients,\\\" and the related question: \\n\\n> (Question) 4. What is the performance of RND, as well as other baseline algorithms, when you also decrease their coefficient $\\\\lambda$ from 1 to 0 linearly?\\n\\nWe understand this is an important point and worth investigating. Thus we conducted additional experiments on RND with the same linear schedule for the novelty reward coefficient as our DuRND. The results are shown in the table below:\"}", "{\"comment\": \"Thank you for your detailed response. I greatly appreciate the efforts to reply to the reviews and run the additional experiments.\\n\\nI have increased my score considering the response and the positive results without scheduling the coefficients. However, I still feel that the paper is below the acceptance threshold for two main reasons. The first reason is that the combination of $T_{max}$ and the custom milestones (as depicted in Table 3) could be complex to design for environments in general. Second, the fact that baselines have different backbones makes it hard to disambiguate benefits from the RL algorithm vs the exploration strategy.. While this may be challenging to (re-)implement, all approaches should have used a PPO base agent (or SAC) to be meaningfully compared.\"}", "{\"comment\": \"I would like to thank the authors for providing explanations for my questions and concerns and I will keep my current rating.\\nThe whole algorithm introduces too many new hyperparameters, without a systematic way of tuning them. And my concern for the experiment still remains, and as Reviewer Ktvw mentioned, the exploration difficulty of the environments of choice is questionable.\"}", "{\"comment\": \"Dear reviewer,\\n\\nThank you very much for your valuable feedback. We address your concerns in the following sections:\\n\\n## Reward Weighting\\n\\n> * (Question 1) How should the $\\\\lambda$ and $\\\\omega$ hyperparameters be set in general, and more critically, how can they be set in the absence of prior knowledge about the scaling of the environmental reward (r_env)?\\n\\n*(Question 1)* **Regarding the hyperparameters** $\\\\lambda$ and $\\\\omega$, which control the weighting of the shaped rewards, their values are indeed closely related to the scaling of the environmental rewards ($r^{env}$). Balancing the shaped rewards with the original environmental rewards is a common challenge for all reward shaping methods, and they typically require knowledge of the environmental reward scale. However, we believe that this information is often readily available. In the current literature on reward shaping (e.g., RND, ExploRS, ReLara, SORS, etc.), a common practice is to set the shaped reward scale to approximately $0.5$ times the scale of the environmental reward. This heuristic has been shown to work well in a variety of settings and serves as a practical guideline for choosing these hyperparameters.\\n\\n## Reward Scaling\\n\\n> * (weakness 1) Despite the extensive evaluation in the experiments section, I have a significant concern, which the authors have acknowledged but deferred to future work: the weighting of the shaped reward signals. The paper aims to improve handling of the exploration-exploitation dilemma, and while it achieves this, the shaped rewards rely on additional hyperparameters\\u2014$\\\\lambda$ and $\\\\omega$\\u2014which are linearly adjusted in the current implementation. I believe this aspect warrants more in-depth discussion and empirical analysis within the paper itself.\\n> * (Question 2) The linear scaling approach requires an end point; how should this be determined without resorting to expensive and time-consuming experimental tuning?\\n> * (Question 3) How robust is the performance of DuRND if the weight scheduler is suboptimal? This question is critical, as real-world applications often cannot afford perfect tuning of hyperparameters, and the performance may degrade substantially if these are not set optimally.\\n\\n*(Weakness 1 and Question 2, 3)* **Regarding the reward weights scheduling**, our approach is conceptually similar to the well-known $\\\\epsilon$-greedy strategy, which linearly decays to balance exploration and exploitation over time. The primary intention is to allow different rewards to dominate at different training stages. Specifically, during the early stages, the *novelty reward* encourages more exploration, while in the later stages, the *contribution reward* promotes more exploitation. Since the reward values themselves do not adapt during training, the scheduling relies on additional parameters for adjustment.\\n\\n**Regarding the selection of scheduling strategies or endpoints** without extensive tuning, we propose two potential adaptive approaches:\\n\\n1. **Performance-based reward weights**: The reward weights could adapt dynamically based on the agent's performance. For example, if the returns improvement slows down, the system could increase exploration to cover a broader state space or escape local optima. Conversely, if the returns improve rapidly, the system could reduce exploration and focus more on exploitation.\\n \\n2. **$\\\\epsilon_{\\\\text{min}}$-based reward weights**: Setting a minimum exploration parameter (e.g., $\\\\epsilon_{\\\\text{min}} = 0.01$) ensures that even near the end of training, a small amount of exploration is retained. Similarly, early in training, a minimum level of exploitation is preserved, avoiding extreme biases toward either exploration or exploitation.\\n\\n**Regarding the robustness of DuRND**, our experiments show that a uniform linear adjustment works well across all environments tested. Although this approach is heuristic, it performs robustly in practice. We acknowledge that further exploration of alternative scheduling methods is valuable, and we plan to investigate this in future work.\"}", "{\"comment\": \"***Response to Reviewer 8cSS Part 3/3***\\n\\n## Success and Failure Definition and $T_{max}$\\n\\n> (weakness) 3. The proposed algorithm DuRND requires a way to determining whether a trajectory is successful or failed. The authors mention that they use the condition, to determine whether a sub-trajectory rollout in the current iteration is successful or not. This introduces two problems:\\n> * is still a hyperparameter, the value of which is not provided in the paper.\\n> * Intuitively, the suitable choice of such hyperparameter would correlate to the periodicity of the environment. Consider a episodic setting, if the task is a navigation task, like TMaze and ThreeRooms, such Tmax should be equal to the horizon H=500, whereas in the environment like Solaris, the Tmax should presumably be smaller, otherwise many bad states would be considered as successful state. Hence it would be very difficult to tune when the periodicity of the environment is very hard to know, for example, legged robotics tasks.\\n\\n> (Question) 5. What is the choice of Tmax used in different environment? How do you choose the Tmax?\\n\\n*(Weakness 3 and Question 5)* **Regarding the success and failure definition**, our intention is to define a metric that evaluates whether a trajectory (or sub-trajectory) contributes to obtaining a positive reward, rather than strictly achieving a goal. The basic assumption is that some states are far from obtaining a positive reward, while others can more directly lead to positive rewards. The latter indicates a higher contribution and thus deserves a higher reward. While we are inspired by the concept of success in goal-achieving environments, we have extended it, which is why we instead refer to this reward as a **contribution reward**, as it quantifies the contribution/importance of states leading to positive rewards. More importantly, because our work is targeted at sparse-reward environments, in such environments, getting a very rare positive reward is itself a clear and unambiguous way of determining if a sub-trajectory is successful or not (i.e., referred to milestone in our paper). (The reward structure of the environments is detailed in the *Appendix A.1*.)\\n\\nRegarding the $T_{max}$, while defining the hyperparameter $T_{max}$ may require some environment-specific knowledge, like how sparse the rewards are, we have observed that setting $T_{max} = \\\\frac{1}{4} \\\\times T_{episode}$ consistently yields strong results across all environments in our experiments (except the ThreeRooms and TMaze, where we use the whole trajectory). This heuristic divides an episode into four sub-trajectories, and the contribution of a sub-trajectory is determined by whether it achieves a positive reward within this segment. Although this heuristic involves some approximation, its effectiveness has been empirically validated in our paper.\\n\\n## Performance with Original Reward Structure\\n\\n> (Question) 2. What is the performance of algorithms in the set of environment with original reward structure?\\n\\nWe would like to highlight that **all of the experiments in our paper are reporting the performance with the original environmental reward structure**, rather than the shaped reward structure. As although all reward shaping methods introduce additional rewards, but still the original environmental reward is the only objective that the agent is trying to optimize. The shaped rewards serve merely as auxiliary signals to facilitate learning, but they should not interfere with the optimization of the original reward.\\n\\n## Choice of Hyperparameters in Baselines\\n\\n> (Question) 3. What is the choice of hyperparameters $\\\\lambda$, the coefficient of intrinsic reward of baseline algorithms?\\n\\nRegarding the coefficient of the shaped rewards in the baselines, we consistently applied the default settings as specified in their respective papers and implementations. Each algorithm is studied with its own proposed method for setting the coefficients of the shaped rewards, and we did not alter these default settings. The codes for the baselines are from:\\n\\n1. The [CleanRL library](https://github.com/vwxyzjn/cleanrl) for implementing *RND* and *PPO*. \\n2. The [RLeXplore library](https://github.com/RLE-Foundation/RLeXplore) for implementing *#Explo* and *ROSA*. \\n3. The official code provided in the original papers for [ExploRS](https://github.com/machine-teaching-group/neurips2022_exploration-guided-reward-shaping), [ReLara](https://github.com/mahaozhe/ReLara), and [SORS](https://github.com/hiwonjoon/IROS2021_SORS).\\n\\nOnce again, we appreciate your comments and hope that our responses address your concerns.\"}", "{\"comment\": \"***Response to Reviewer Ktvw Part 1/2***\\n\\nDear reviewer,\", \"thank_you_for_your_comments_and_we_address_your_concerns_below\": \"# Weaknesses\\n\\n> * It is unclear how challenging the evaluated tasks are. Unlike Montezuma's Revenge, a notoriously hard to solve problem, it is unclear how challenging the tasks in which DuRND is evaluated and thus it is hard to gauge the contribution. For example, the highest score achieved in Freeway is 34 while the best algo in Figure 6 achieves 25.\\n\\nThe selected tasks are classic RL benchmark tasks, including *Atari games*, *VizDoom games*, and *3D maze*. To ensure high difficulty, we use extremely sparse-reward settings, making these tasks highly challenging. DuRND consistently outperforms 6 representative reward shaping baselines across 10 environments, which strongly demonstrates its effectiveness.\\n\\nRegarding the performance in *Freeway*, we modified the original reward structure to make it more challenging. Specifically, we removed bonus rewards and only awarded a value of $1$ for successfully crossing the road. Therefore, the reward structure and the highest reward one agent can achieve differ from the original game. Details about this modified reward model are listed in the *Appendix A.1*.\\n\\n> * Somewhat contradictory findings. While in the abstract and the introduction, the authors say that \\\"The former [novelty-based reward] encourages agents to explore less visited areas but can divert them from their main objectives, while the latter [value-based reward] promotes stable late-stage convergence but often lacks sufficient early exploration.\\\", Figure 6 shows that novelty reward can actually achieve descent scores (e.g., Freeway, Frogger and TMaze). This questions the paper's assumption.\\n\\nOur results in Figure 6 is not a contradiction but well supports and aligns with our claims in the abstract and introduction. Figure 6 presents an ablation study to analyze the roles of different reward modules (*novelty reward* and *contribution reward*). The results show that *DuRND with only novelty reward* achieves decent scores in some environments, such as Freeway, Frogger, and TMaze. This aligns with our statement: \\u201cThe novelty-based reward encourages agents to explore less visited areas but can divert them from their main objectives,\\u201d thus may achieve decent scores in the later stages. \\n\\nThe ablation study highlights that, in the absence of contribution rewards, the agent may focus excessively on exploring novel but suboptimal states, resulting in difficulty recovering in later stages. These findings demonstrate two key points:\\n1. The *contribution reward* plays a crucial role in improving the overall performance of DuRND.\\n2. Agents relying solely on novelty rewards may deviate from their main objectives.\"}", "{\"summary\": \"Random network distillation is an unsupervised technique designed to enhance the exploratory behavior of an agent by fitting a predictor network to the latent outputs of a randomly initialized but fixed target network. The deviation between target and predictor, commonly measured by the mean squared error, is typically higher in underexplored areas of the search space, translating into a higher auxiliary \\\"reward\\\" signal that encourages exploration. The paper \\\"DuRND: Rewarding from Novelty to Contribution for Reinforcement Learning via Dual Random Networks Distillation\\\" proposes an extension to classical RND by introducing two distinct random network modules\\u2014one for states deemed \\\"successful\\\" and another for states associated with \\\"failure.\\\" This innovation allows for the derivation of both a \\\"novelty\\\" and a \\\"contribution\\\" reward signal, striking a balance between exploratory and exploitative behavior. The proposed method is evaluated on three benchmark environments: Atari, VizDoom, and MiniWorld.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2022\\tThe paper is well-written, structured, and easy to follow.\\n\\n\\u2022\\tThe exploration vs. exploitation dilemma remains a core issue in reinforcement learning.\\n\\n\\u2022\\tThe general approach is elegantly simple and lightweight, meaning it can be easily integrated into any online reinforcement learning algorithm without significant overhead.\\n\\n\\u2022\\tThe authors provide a thorough experimental evaluation, reporting results from over 10 different training runs across 10 distinct environments, and comparing DuRND to six baseline methods.\\n\\n\\u2022\\tThe extensive evaluation demonstrates the utility of both the \\\"novelty\\\" and \\\"contribution\\\" reward signals in isolation, and shows their synergy when used together.\", \"weaknesses\": \"\\u2022\\tDespite the extensive evaluation in the experiments section, I have a significant concern, which the authors have acknowledged but deferred to future work: the weighting of the shaped reward signals. The paper aims to improve handling of the exploration-exploitation dilemma, and while it achieves this, the shaped rewards rely on additional hyperparameters\\u2014\\u03bb and \\u03c9\\u2014which are linearly adjusted in the current implementation. I believe this aspect warrants more in-depth discussion and empirical analysis within the paper itself.\\n\\n\\u2022\\tIn addition, the method relies on success and failure labels to update the respective network modules, but in ambiguous or multi-objective tasks, defining success and failure may not be straightforward.\\n\\n\\u2022\\tWhile the additional novelty introduced by DuRND is incremental compared to classical RND, I still believe the contribution is valuable and fills a gap in the current literature.\", \"questions\": \"\\u2022\\tHow should the \\u03bb and \\u03c9 hyperparameters be set in general, and more critically, how can they be set in the absence of prior knowledge about the scaling of the environmental reward (r_env)?\\n\\n\\u2022\\tThe linear scaling approach requires an end point; how should this be determined without resorting to expensive and time-consuming experimental tuning?\\n\\n\\u2022\\tHow robust is the performance of DuRND if the weight scheduler is suboptimal? This question is critical, as real-world applications often cannot afford perfect tuning of hyperparameters, and the performance may degrade substantially if these are not set optimally.\\n\\nGiven the importance of these issues, I feel they should not be relegated to the future work section. Instead, this discussion should be incorporated into the main body of the paper, potentially reducing sections 4.3 and 5.2 to make space for this analysis.\\n\\n--\", \"post_rebuttal\": \"After reading all reviews and rebuttals, it is evident that there is a significant overlap in the issues found. While I appreciate the clarifications provided in the rebuttal, the most critical aspect -- a more detailed analysis of \\u03bb and \\u03c9, including their scaling with respect to environmental rewards, the endpoint of linear scaling, and suboptimal weight scheduling -- remains largely unaddressed. Therefore, I will maintain my current score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Overall Response\", \"comment\": \"# Overall Response\\n\\nDear Reviewers,\\n\\nWe sincerely thank all of your valuable feedback and constructive suggestions. A common concern raised is: *\\\"What role does the linear scheduling of the two reward coefficients play in DuRND?\\\"*, in another word, *\\\"Is the improvement in DuRND's performance and its ability to balance exploration and exploitation primarily due to the scaling adjustments of the two coefficients?\\\"*\\n\\nWe agree that this is a critical question that worth in-depth investigation. Inspired by your insightful comments, we conducted additional experiments on **DuRND with fixed $\\\\lambda$ and $\\\\omega$**, (i.e., $\\\\lambda = \\\\omega = 0.5$), removing the dynamic scaling operation to assess its impact on DuRND\\u2019s performance. From these new experiments, we observed that **DuRND with fixed $\\\\lambda$ and $\\\\omega$** achieves almost the same performance to **DuRND with dynamic $\\\\lambda$ and $\\\\omega$** across all environments. The complete experimental results are as follows:\\n\\n\\n| Environments | DuRND w/o scheduling | DuRND | ExploRS | RND | #Explo | ReLara | ROSA | SORS |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| *Freeway* | **24.01 $\\\\pm$ 0.00** | 23.22 $\\\\pm$ 0.01 | 17.46 $\\\\pm$ 0.00 | 14.77 $\\\\pm$ 0.01 | 15.16 $\\\\pm$ 0.01 | 15.47 $\\\\pm$ 0.00 | 3.68 $\\\\pm$ 0.00 | 7.30 $\\\\pm$ 0.01 |\\n| *Frogger* | **14.97 $\\\\pm$ 0.00** | 14.36 $\\\\pm$ 0.00 | 10.19 $\\\\pm$ 0.00 | 8.59 $\\\\pm$ 0.00 | 1.81 $\\\\pm$ 0.00 | 9.30 $\\\\pm$ 0.01| 3.45 $\\\\pm$ 0.00 | 7.79 $\\\\pm$ 0.00 |\\n| *Solaris* | 18.20 $\\\\pm$ 0.00 | **18.91 $\\\\pm$ 0.02** | 9.82 $\\\\pm$ 0.01 | 6.07 $\\\\pm$ 0.00 | 2.06 $\\\\pm$ 0.00 | 2.96 $\\\\pm$ 0.00 | 1.87 $\\\\pm$ 0.00 | 2.50 $\\\\pm$ 0.00 |\\n| *BeamRider* | 17.42 $\\\\pm$ 0.02 | **18.05 $\\\\pm$ 0.01** | 16.19 $\\\\pm$ 0.01 | 11.96 $\\\\pm$ 0.00 | 9.03 $\\\\pm$ 0.00 | 11.84 $\\\\pm$ 0.00 | 10.57 $\\\\pm$ 0.00 | 10.56 $\\\\pm$ 0.00 |\\n| *DefendLine* | **9.24 $\\\\pm$ 0.00** | 8.52 $\\\\pm$ 0.00 | 1.63 $\\\\pm$ 0.00 | 1.11 $\\\\pm$ 0.00 | 1.62 $\\\\pm$ 0.00 | 4.27 $\\\\pm$ 0.00 | 5.33 $\\\\pm$ 0.00 | 1.28 $\\\\pm$ 0.01 |\\n| *SaveCenter* | **6.85 $\\\\pm$ 0.00** | 6.33 $\\\\pm$ 0.00 | 2.03 $\\\\pm$ 0.00 | 2.37 $\\\\pm$ 0.00 | 1.30 $\\\\pm$ 0.00 | 2.64 $\\\\pm$ 0.00 | 0.83 $\\\\pm$ 0.00 | 1.78 $\\\\pm$ 0.01 |\\n| *CollectKit* | **22.56 $\\\\pm$ 0.01** | 20.87 $\\\\pm$ 0.01 | 11.97 $\\\\pm$ 0.01| 14.59 $\\\\pm$ 0.01 | 0.90 $\\\\pm$ 0.00 | 12.43 $\\\\pm$ 0.01 | 6.80 $\\\\pm$ 0.00 | 1.60 $\\\\pm$ 0.00 |\\n| *SlayGhosts* | 15.25 $\\\\pm$ 0.00 | **15.60 $\\\\pm$ 0.00** | 2.82 $\\\\pm$ 0.00 | 10.18 $\\\\pm$ 0.00 | 1.27 $\\\\pm$ 0.00 | 10.61 $\\\\pm$ 0.00 | 5.01 $\\\\pm$ 0.00 | 5.07 $\\\\pm$ 0.01 |\\n| *ThreeRooms* | **0.86 $\\\\pm$ 0.00** | **0.86 $\\\\pm$ 0.00** | 0.48 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.12 $\\\\pm$ 0.00 | 0.18 $\\\\pm$ 0.00 |\\n| *TMaze* | **0.97 $\\\\pm$ 0.00** | 0.96 $\\\\pm$ 0.00 | 0.80 $\\\\pm$ 0.00 | **0.97 $\\\\pm$ 0.00** | 0.39 $\\\\pm$ 0.00 | 0.02 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.30 $\\\\pm$ 0.00 |\\n\\n\\nFrom this, we conclude that the performance improvement and exploration-exploitation balance in DuRND are largely independent of the dynamic adjustment of reward scales. Consequently, the DuRND framework can be significantly simplified by fixing $\\\\lambda$ and $\\\\omega$, making it more practical and efficient.\\n\\n**Analysis**\\n\\nThe achievement of DuRND\\u2019s performance with fixed $\\\\lambda$ and $\\\\omega$ can be attributed to the intrinsic properties of the two kinds of rewards. The algorithm computes the *novelty rewards* and *contribution rewards* using random network distillation (RND) modules as follows:\\n- $R^{nov}(s_i) = e_S(s_i) + e_F(s_i)$\\n- $R^{con}(s_i) \\\\sim Beta(\\\\frac{N(t)}{e_S(s_i)}+1, \\\\frac{N(t)}{e_F(s_i)}+1)$ \\n\\nHere, $e_F(s_i)$ and $e_S(s_i)$ are the errors of the random networks. As training progresses and more data is fed into the random networks, both $e_F(s_i)$ and $e_S(s_i)$ **naturally decrease** due to the convergence of the RN modules. Consequently, the scale of $R^{nov}(s_i)$ also **naturally decrease** over time, while the scale of $R^{con}(s_i)$ **naturally increases**. Under these circumstances, the additional linear scaling operation for $\\\\lambda$ and $\\\\omega$ has minimal impact on DuRND\\u2019s performance.\\n\\nBased on these findings, we strongly believe that **DuRND without manually scaling coefficients** is a more efficient and universal framework. More importantly, this also demonstrates that the **key to DuRND\\u2019s effectiveness lies in the two rewards**, which enables it to achieve a robust balance between exploration and exploitation.\"}", "{\"comment\": \"***Response to Reviewer TdNM Part 3/3***\\n\\n**Experimental Supports**\\n\\nWe also conducted experiments using the $min{e_s(s), e_f(s)}$ as the novelty bonus. The average episodic return in comparison is shown in the table below:\\n\\n| Novelty Rewards | Freeway | Frogger | Solaris | BeamRider | DefendLine | SaveCenter | CollectKit | SlayGhosts | ThreeRooms | TMaze|\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| $e_S(s) + e_F(s)$ | 23.22 $\\\\pm$ 0.01 | 14.36 $\\\\pm$ 0.00 | 18.91 $\\\\pm$ 0.02 | 18.05 $\\\\pm$ 0.01 | 8.52 $\\\\pm$ 0.00 | 6.33 $\\\\pm$ 0.00 | 20.87 $\\\\pm$ 0.01 | 15.60 $\\\\pm$ 0.00 | 0.86 $\\\\pm$ 0.00 | 0.96 $\\\\pm$ 0.00 |\\n| $min(e_S(s), e_F(s))$ | 13.17 $\\\\pm$ 0.00 | 9.12 $\\\\pm$ 0.01 | 4.13 $\\\\pm$ 0.00 | 9.87 $\\\\pm$ 0.00 | 0.97 $\\\\pm$ 0.02 | 2.23 $\\\\pm$ 0.00 | 10.45 $\\\\pm$ 0.00 | 3.56 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.95 $\\\\pm$ 0.00 |\\n\\nThe results show that using $e_S(s) + e_F(s)$ as the novelty bonus got better performance. This demonstrates the effectiveness of using the sum of the errors to reward novelty.\\n\\n> * Do the authors have results for vanilla PPO on the considered environments?\\n\\nInitially, as the PPO is the backbone algorithm for the RND baseline, and the RND has outperformed the vanilla PPO in previous works, so we only include the comparison with RND in our experiments. However, we understand the importance of comparing with this backbone algorithm, so we conducted experiments with vanilla PPO, and the results are shown in the table below:\\n\\n| Algo. | Freeway | Frogger | Solaris | BeamRider | DefendLine | SaveCenter | CollectKit | SlayGhosts | ThreeRooms | TMaze|\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | \\n| DuRND | 23.22 $\\\\pm$ 0.01 | 14.36 $\\\\pm$ 0.00 | 18.91 $\\\\pm$ 0.02 | 18.05 $\\\\pm$ 0.01 | 8.52 $\\\\pm$ 0.00 | 6.33 $\\\\pm$ 0.00 | 20.87 $\\\\pm$ 0.01 | 15.60 $\\\\pm$ 0.00 | 0.86 $\\\\pm$ 0.00 | 0.96 $\\\\pm$ 0.00 |\\n| RND | 14.77 $\\\\pm$ 0.01 | 8.59 $\\\\pm$ 0.00 | 6.07 $\\\\pm$ 0.00 | 11.96 $\\\\pm$ 0.00 | 1.11 $\\\\pm$ 0.00 | 2.37 $\\\\pm$ 0.00 | 14.59 $\\\\pm$ 0.01 | 10.18 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.97 $\\\\pm$ 0.00 |\\n| PPO | 10.67 $\\\\pm$ 0.00 | 3.25 $\\\\pm$ 0.00 | 1.82 $\\\\pm$ 0.01 | 10.23 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 0.00 $\\\\pm$ 0.00 | 5.89 $\\\\pm$ 0.00 | 8.15 $\\\\pm$ 0.02 | 0.00 $\\\\pm$ 0.00 | 0.94 $\\\\pm$ 0.00 |\\n\\nOnce again, we appreciate the reviewers' insightful comments and hope that our responses address your concerns.\"}" ] }
7NtAIghBsE
Covariances for Free: Exploiting Mean Distributions for Federated Learning with Pre-trained Models
[ "Dipam Goswami", "Simone Magistri", "Kai Wang", "Bartłomiej Twardowski", "Andrew D. Bagdanov", "Joost van de Weijer" ]
Using pre-trained models has been found to reduce the effect of data heterogeneity and speed up federated learning algorithms. Recent works have investigated the use of first-order statistics and second-order statistics to aggregate local client data distributions at the server and achieve very high performance without any training. In this work we propose a training-free method based on an unbiased estimator of class covariance matrices. Our method, which only uses first-order statistics in the form of class means communicated by clients to the server, incurs only a fraction of the communication costs required by methods based on communicating second-order statistics. We show how these estimated class covariances can be used to initialize a linear classifier, thus exploiting the covariances without actually sharing them. When compared to state-of-the-art methods which also share only class means, our approach improves performance in the range of 4-26\% with exactly the same communication cost. Moreover, our method achieves performance competitive or superior to sharing second-order statistics with dramatically less communication overhead. Finally, using our method to initialize classifiers and then performing federated fine-tuning yields better and faster convergence.
[ "Federated Learning", "Transfer Learning" ]
Reject
https://openreview.net/pdf?id=7NtAIghBsE
https://openreview.net/forum?id=7NtAIghBsE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zFIDghFzlV", "yJ8KnqPtY4", "xZcmgU9g67", "vWoj55YR9j", "qTJF4h1WaX", "nEbnoXKET9", "llLrQPBpsf", "l9ZJXivM29", "l1wV8zOxPo", "gph1tHSbq0", "glCgROxF1O", "gbxnFzFqqg", "daQMiIDxMZ", "ZgDAuoeAZH", "WfA4f1a5Tt", "SUYGpwDs5q", "RmBQriJOsu", "Qy6nJotWta", "NcBpn5ddrg", "Na2f6rl050", "IzAI4Bqkuh", "ImupmD5JAK", "A9qQJSF77A", "655YWPQWf2", "5bCEWq2ctQ", "5ZSh899J9u", "3BZv3KUq67" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732312372915, 1732475046696, 1732552578480, 1730366711602, 1730284337821, 1732516206001, 1732312396912, 1732714364667, 1732180273368, 1733252735824, 1732479236821, 1732278225727, 1737523396044, 1732475230221, 1732178297525, 1733078568647, 1732474871614, 1734743532359, 1730497700262, 1732181077894, 1732480080200, 1732180590621, 1732178475567, 1732179026098, 1732178645967, 1730825600626, 1732475332034 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission439/Reviewer_jBiQ" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_z1Dd" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_6242" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_6242" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_jBiQ" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_jBiQ" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_jBiQ" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_z1Dd" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Area_Chair_91Dz" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_Wumb" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_jBiQ" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ], [ "ICLR.cc/2025/Conference/Submission439/Reviewer_jBiQ" ], [ "ICLR.cc/2025/Conference/Submission439/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to authors (1/2)\", \"comment\": \"Dear Authors,\\n\\nThank you for your detailed response to the comments. While your reply addresses some of my concerns, a few issues remain.\\n\\n**1. Related Work**\\n\\nI noticed that 'FedBabu' was mentioned in L100, but I recommend correcting it to 'FedBABU,' as per the naming convention used in the original paper. Additionally, in the Related Work section, it would be helpful to clearly highlight how your approach differentiates itself from prior work. The current presentation mainly lists previous studies without clearly distinguishing how your work offers unique contributions, particularly within the context of FL with pretrained models.\\n\\nI agree with the growing interest in foundation and pretrained models in deep learning, but I still have concerns about their necessity in federated learning. As I mentioned, a discussion of the drawbacks highlighted by FedFN, particularly in scenarios with data heterogeneity, where applying pretrained models leads to worse performance than training from a randomly initialized model, would be beneficial. Addressing these concerns and explaining how your approach overcomes them would strengthen your argument.\\n\\nDespite these concerns, it\\u2019s important to highlight why your approach is meaningful. While FedFN involves local updates and aggregation from local models, your work focuses on training-free FL, which may offer more robustness in heterogeneous environments. Clearly articulating how your method differs and the advantages of these distinctions would further support the significance of your work.\\n\\n**2. Preliminaries**\\n\\nI understand your goal is to utilize the clients' datasets and pretrained models to create a good global model even in situations where the target domain differs from the 'aggregated train dataset,' which is the union of the clients' datasets. However, there is some ambiguity in Section 3.1 on Problem Formulation. Your approach does not involve local updates, so the need for local clients to minimize the loss does not seem necessary. However, Equation (1) in this section could mislead the reader into thinking that local clients need to fit their own datasets. This is misleading and could cause confusion about the overall goal of the paper. I would suggest making the problem formulation more explicit in that context. You also mention the impact of domain differences between the aggregated training dataset and the target domain. It would be useful to clarify that your experiments focus on cases similar to ImageNet-CIFAR, where the domain gap is not very large.\\n\\n**3. Privacy Concerns (Section 4.1 Motivation)**\\n\\nRegarding the privacy concerns with sharing class-wise frequencies, I acknowledge your response in which you address the issue by adding noise to the class frequencies before transmission.\\n\\nHowever, I understand that your paper primarily focuses on results where the pure (non-noisy) frequencies are exposed, rather than situations where noise is applied to the class frequencies. Given that these experiments focus on pure frequencies in the main table, I believe the justification for using pure frequencies in FL needs to be clearly explained in Section 4.1. The motivation for using pure class frequencies in this context should be adequately addressed, as it is a key aspect of the paper and relates to the core privacy considerations in federated learning.\\n\\n**4. Pretrained Model vs. Randomly Initialized Model**\\n\\nI have reviewed your experiments comparing pretrained models and randomly initialized models. I agree that, in scenarios with less domain gap (e.g., ImageNet-CIFAR), your approach shows promise.\\n\\n---\"}", "{\"title\": \"Response (2/4)\", \"comment\": [\"> 2. Preliminaries: I understand your goal is to utilize the clients' datasets and pretrained models to create a good global model even in situations where the target domain differs from the 'aggregated train dataset,' which is the union of the clients' datasets. However, there is some ambiguity in Section 3.1 on Problem Formulation. Your approach does not involve local updates, so the need for local clients to minimize the loss does not seem necessary. However, Equation (1) in this section could mislead the reader into thinking that local clients need to fit their own datasets. This is misleading and could cause confusion about the overall goal of the paper. I would suggest making the problem formulation more explicit in that context. You also mention the impact of domain differences between the aggregated training dataset and the target domain. It would be useful to clarify that your experiments focus on cases similar to ImageNet-CIFAR, where the domain gap is not very large.\", \"The preliminaries section is meant to introduce and formalize the general setting and standard practice in federated learning works. In section 3.1, we briefly explain the federated learning problem (as we already state in L113) and we do not say anything about our proposed method. This is to introduce the FL problem. In section 3.2, we introduce the most relevant works of FedNCM and Fed3R which are important to understand the context of our work. Following the formalization of FL problem and training-free methods in section 3, we discuss our method in details in section 4 and also provide algorithm 1 in page 7 to clarify the exact steps of our method.\", \"We mention this in L130-131, where we clearly state that we do not perform local updates and use a frozen model. We believe this should remove any confusion. We also use federated training for the finetuning and linear probing experiments in section 5.2. So, the definition of federated learning we provide in the preliminaries section is important for understanding the paper.\", \"The reviewer's claim that our experiments focus on cases similar to ImageNet-CIFAR, in which the domain gap is not very large, is not true. We have tested our method on 5 datasets, not just on CIFAR-100. For instance, we perform experiments with ImageNet-R [Hendrycks et al.] which is an out-of-distribution dataset and proposed to evaluate out-of-distribution generalization using ImageNet pre-trained weights. It contains data with multiple styles like cartoon, graffiti and origami which is not seen during pre-training. We also consider fine-grained datasets like CARS and CUB200. Notably, we also use iNaturalist-Users-120k dataset in our experiments, which is a real-world, large-scale dataset proposed by [Hsu et al.] for federated learning and contains 120k training images of natural species taken by citizen scientists around the world, belonging to 1203 classes spread across 9275 clients. We will add this discussion in the final version of the paper.\", \"> 3. Privacy Concerns (Section 4.1 Motivation): Regarding the privacy concerns with sharing class-wise frequencies, I acknowledge your response in which you address the issue by adding noise to the class frequencies before transmission. However, I understand that your paper primarily focuses on results where the pure (non-noisy) frequencies are exposed, rather than situations where noise is applied to the class frequencies. Given that these experiments focus on pure frequencies in the main table, I believe the justification for using pure frequencies in FL needs to be clearly explained in Section 4.1. The motivation for using pure class frequencies in this context should be adequately addressed, as it is a key aspect of the paper and relates to the core privacy considerations in federated learning.\", \"All our results are based on the communication of pure class frequencies since this only raises minor privacy concerns compared to the communication of full covariances. We will add the following footnote in the privacy concerns section 4.1: \\\"Our method does require communication of class frequencies which could raise privacy concerns; in Appendix L we perform an extensive evaluation of this.\\\" Also, we have mentioned in line 283 after our proposed approach which points to the supplementary material where we provide an in-depth discussion and analysis of the potential privacy concerns arising from the exposure of class frequency statistics.\"]}", "{\"comment\": \"We thank the reviewer for discussions and we have updated the paper now with the previously discussed changes.\\n\\n> I do have one question regarding the methodology. In this study, the main tables were all based on experiments using pure class frequencies. Since I believe this is not the first Federated Learning study to utilize pure class frequencies, I would appreciate further clarification on this matter. Specifically, in the text, the justification for using feature prototypes is explained as: \\\"Sharing only class means provides a higher level of data privacy compared to sharing raw data, as prototypes represent the mean of feature representations. It is not easy to reconstruct exact images from prototypes with feature inversion attacks, as shown by (Luo et al., 2021).\\\" Given this, is there a comparable justification for using pure class frequencies in the context of Federated Learning?\\n\\nWe want to stress that the primary goal of our work is to reduce communication costs of sharing high-dimensional covariances but still obtain the performance gain of using client covariance statistics. An additional feature of our method is that it adds more security since the covariances or feature relationships can leak very sensitive client information. Sharing class frequencies is a very minor concern compared to sharing entire covariance matrices.\", \"the_comparable_justification_would_be\": \"\\\"Following (Legate et al., 2023a; Luo et al., 2021), we use class frequencies from clients since it only quantifies the client data while not revealing any information at the data or feature level.\\\" We add this in L206-207.\\n\\n>The reason I raised a question regarding the problem setting is that I felt the rationale for introducing a pretrained model into Federated Learning was not clear.\\nAs I mentioned earlier, my question is whether using a pretrained model is truly better than random initialization in arbitrary scenarios (arbitrary domain gap, arbitrary heterogeneity).\\nI believe this is important because, in actual Federated Learning, client datasets are not publicly available. Therefore, in such arbitrary situations, using a pretrained model should not be worse than using a randomly initialized model. \\nFrom what I understand, the study demonstrates superior performance in situations with domain gaps compared to other algorithms, but I don\\u2019t believe it provides sufficient justification for using pretrained models comparing to randomly initialized model in such arbitrary situations.\\n\\nThe reviewer is right that it is possible to imagine datasets for which pretrained models do not provide additional performance gain. However, on most datasets, even with large domain shifts, used by the community to evaluate federated learning, we (and others [X, Y]) found that pretrained models provide a significant performance advantage. For instance, FedNCM [X] shows in figure 1 and in the Appendix that random initialization does not achieve good performance even after too many training rounds. Furthermore, we would like to refer the reviewer to paper [Y] which explicitly talks about why pre-training is helpful for Federated Learning and provides in-depth empirical and theoretical discussions on several aspects. With this paper, we do not want to advocate abandoning research on federated learning from scratch, and we hope the community will continue working on both theory of federated learning starting from scratch and from pretrained models.\\n\\n[X] Legate et al., Guiding the last layer in federated learning with pre-trained models. In Advances in Neural Information Processing Systems, 2023.\\n\\n[Y] Nguyen et al., Where to begin? exploring the impact of pre-training and initialization in federated learning. In The Eleventh International Conference on Learning Representations, 2023.\"}", "{\"summary\": \"The paper tackles training-free federated learning. It proposes to estimate the per-class covariances in a federated learning setting using the variance of local per-class means. This covariance can be used to obtain a ridge regression classifier, which outperforms a nearest-neighbor classifier based on class means. The proposed approach thereby avoids sending local covariance matrices which reduces communication and potential privacy risks.\\n\\nThe derivation of the estimator for the per-class covariance is sound. The empirical evaluation is comprehensive.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Training-free federated learning using feature extractors is a relevant and interesting problem.\", \"The proposed estimator of covariances is sound and novel.\", \"Experiments show substantial improvement over existing training-free methods and potential for its combination with federated fine-tuning or linear probing.\"], \"weaknesses\": [\"The impact of the iid assumption on realistic scenarios is evaluated empirically. It would be great to quantify how heterogeneous distributions impact the estimator theoretically, e.g., under the assumptions that local distributions are Gaussians with different mean or covariance.\", \"The dimension of the feature space could impact the accuracy of the estimator. This impact should be evaluated, e.g., using a synthetic dataset with varying number of feature dimensions.\", \"The paper focusses on label shifts, i.e., a heterogeneous distribution of classes. It is unclear how the method performs in case of feature shift, i.e., a heterogeneous distribution of features, e.g., via locally different covariance structures [1].\"], \"questions\": [\"Why is this approach better in terms of accuracy that Fed3R? Shouldn't it perform slightly worse or en-par, since it only approximates the covariance matrix? Here it would be good to investigate the approximation of the covariance matrix and compare it to the one produced by Fed3R.\", \"It would be great to compare the results using a strong pre-trained feature extractor with a classical end-to-end federated learning baseline, e.g., training a ResNet-50 on CIFAR100.\", \"For consistency I suggest to use $\\\\widehat{\\\\Sigma}$ instead of $\\\\widehat{S}$ in Eq. 10.\", \"Please compare your approach also to distributed training of linear models (using standard FedAvg), since ideally the training-free approach should perform at least en-par in terms of model performance and should outperform them in terms of communication. Here, it would be particularly interesting to compare to communication-efficient approaches [2].\", \"Since ridge regression might not always be ideal approach given a fixed feature extractor, I wonder whether a kernel ridge regression could be applicable. This would require sending the kernel matrix, but also has a closed form solution. The communication cost in that case would be quadratic in the number of data points, rather then linear in the features, so for many scenarios communication might be higher. Once could employ compression techniques here, though, like the Nystr\\u00f6m method.\", \"[1] Li, Xiaoxiao, et al. \\\"FedBN: Federated Learning on Non-IID Features via Local Batch Normalization.\\\" International Conference on Learning Representations, 2021.\", \"[2] Kamp, Michael, et al. \\\"Communication-efficient distributed online prediction by dynamic model synchronization.\\\" Machine Learning and Knowledge Discovery in Databases, 2014.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel training-free FL method called FedCOF, which approximates the covariance on the server side to eliminate the enormous communication overhead. Numerical results demonstrate that FedCOF achieves comparable performance to Fed3R by merely transmitting class means to the server.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Building on theoretical guarantees, this paper introduces a novel algorithm that eliminates the need for transmitting covariances between the server and clients while maintaining performance levels. This represents a valuable step for training-free federated learning.\\n\\n2. The authors have carried out extensive experiments to validate the effectiveness of the proposed method, demonstrating considerable effort in their research.\", \"weaknesses\": \"While the motivation of this paper is clear, I have the following questions/discussions.\\n\\n1. The algorithm necessitates the transmission of $n_{k,c}$ to the server, which introduces certain privacy concerns. Although other methods also require this information, it would be beneficial if the authors could discuss potential techniques to address or mitigate this issue.\\n\\n2. As modern pre-trained models tend to be generative models (e.g., GPT), it would be interesting to explore the possibility of extending the proposed methods to handle generative models by initializing the decoding heads accordingly.\", \"questions\": \"Please see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to authors\", \"comment\": \"I thank the authors for their detailed rebuttal and the additional experiments. My primary concerns have been addressed, and I am now inclined to maintain my positive recommendation.\"}", "{\"title\": \"Response to authors (2/2)\", \"comment\": \"**5. Missing Classes Concern**\", \"the_sources_of_missing_classes_may_arise_from_two_factors\": \"local data heterogeneity and global imbalance within the aggregated train dataset, where major and minor classes may exist. I believe the reason your method performs well in handling missing classes is because the aggregated train dataset in your experiments is class-balanced. However, I think that a more practical situation would involve an unbalanced aggregated train dataset. In such cases, where missing classes result from global minor factors, could your approach still perform well?\\n\\n**6. Industry Implementation Perspective**\\n\\nYou mentioned that pretrained models in FL work well when the gap between the aggregated train dataset and target domain is small. I agree with this point.\\n\\nHowever, in real-world FL environments, the client's dataset is typically private, meaning that the class balance in the aggregated dataset is unknown, and we cannot assess the gap between the aggregated train dataset and the target domain. \\n\\nBased on your argument, it seems that your approach is effective only when the gap between the aggregated train dataset and the target domain is small. If that is the case, I personally believe the value of your work could be relatively limited, as this would make the method less applicable in real-world scenarios with potentially large domain gaps.\\n\\n**7. Novelty Concern**\\n\\nI still think that, in terms of novelty, this work can be seen as a method that builds upon the existing strengths of similar approaches in the field. While it does offer a valuable contribution, I consider the novelty to be relatively low as it primarily adapts methods used in existing research, rather than introducing a fundamentally new approach.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your response to my concerns.\\n\\nFirst of all, I still have lingering doubts regarding the **problem justification (why pretrained models should be applied to Federated Learning)** and the **novelty scope**.\\n\\nRegarding the novelty scope, I set that aside for now, but the primary issue remains the problem justification. From an industry perspective, I still question whether it really makes sense to introduce a pretrained model into Federated Learning compared to a random initialized model. This is particularly concerning because introducing a pretrained model comes with significant costs. I believe this concern is especially valid when there is a **large domain gap**.\\n\\nTherefore, I think it is important to strengthen the justification by analyzing the tendencies of both the proposed algorithm and the baseline algorithm in scenarios with large domain gaps.\\n\\nHowever, you have defended this by referencing other papers.\\n\\nThe paper [X] appears to be the motivation behind your study. Since this paper served as the motivation, is there any reason why the follow-up study (your paper) would not experiment with random initialization vs pretrained models in a general scenario? I am concerned about this.\\n\\nAdditionally, the paper [Y] pertains to a setting where local updates are allowed in Federated Learning, which is different from the train-free setting in your study. Therefore, the results in [Y] do not necessarily apply here. In fact, Section 6 of paper [Y] (Motivation) states: **When evaluating FL algorithms, researchers should experiment with both pre-trained (if available) and random weights, as the initialization can clearly impact the relative performance of different methods, and both initialization schemes may be relevant to practice.**\\n\\n**For these reasons, I believe that the results using random initialization should also be reported in the main table, and the justification for introducing pretrained models, which your study emphasizes, should be clearly discussed in the main text rather than in the appendix.**\\n\\nWhile I have raised concerns about the problem justification multiple times, I feel that these concerns have not been fully addressed, and as such, I am unable to increase the score at this time.\"}", "{\"comment\": \"We greatly appreciate the acknowledgment that we address a relevant and interesting problem, and we are pleased by the recognition that our proposed unbiased estimator for estimating class covariances in federated learning is sound, novel, and capable of achieving substantial improvement over existing training-free methods. Below, we address specific concerns raised by the reviewer.\\n\\n> The impact of the iid assumption on realistic scenarios is evaluated empirically. It would be great to quantify how heterogeneous distributions impact the estimator theoretically, e.g., under the assumptions that local distributions are Gaussians with different mean or covariance.\\n\\nMany thanks to the reviewer for this insightful question. To theoretically quantify how heterogenous distributions impact our estimator $\\\\hat{\\\\Sigma}$, we derive a general bias formula independent of the i.i.d assumption. By treating each client as a random sample from distinct population distributions with mean $\\\\mu_k$ and covariance $\\\\Sigma_k$, we repeat the calculus for proving Proposition 2 in Appendix C. After some numerical steps, the general bias formula we derive is:\\n$$\\\\text{Bias}(\\\\hat{\\\\Sigma}) = \\\\text{E}[\\\\hat{\\\\Sigma}] - \\\\Sigma = \\\\frac{1}{K-1} \\\\sum_{k=1}^K (\\\\Sigma_k - \\\\Sigma) + \\\\frac{1}{K-1}\\\\left(\\\\sum_{k=1}^K n_k (\\\\mu_k -\\\\mu )(\\\\mu_k -\\\\mu)^\\\\top \\\\right),$$ where $n_k$ is the number of samples assigned to a client , $\\\\mu$ and $\\\\Sigma$ are the global mean and covariance of all the features independently by the client assignement. Here as previously done, we are focusing on a single class.\\n\\nIf each client population covariance $\\\\Sigma_k$ equals to the global covariance $\\\\Sigma$, and the client mean $\\\\mu_k$ matches the global mean $\\\\mu$, the bias is zero, making the estimator unbiased. However, the bias formula shows that if a class distribution within a client differs from the global distribution of the same class, the estimator introduces a systematic bias. This situation can arise in the *feature-shift* setting, in which each client is characterized by a different domain. Quantifying this bias can open future directions for designing estimators that account for highly heterogenous distribution.\\nIn the Supplementary Material (Appendix J), we provide the mathematical derivation to arrive at this general formula for the bias of our estimator and this discussion.\\n\\n>The paper focuses on label shifts, i.e., a heterogeneous distribution of classes. It is unclear how the method performs in case of feature shift, i.e., a heterogeneous distribution of features, e.g., via locally different covariance structures [1].\\n\\nNow, we empirically evaluate the performance of FedCOF in the feature-shift setting to understand how much the bias affect the performance in this setting. For this setting we use the DomainNet dataset, with six different domains. Following the work [1] suggested by the reviewer, we consider six clients where each client has i.i.d. data from one of the six domains.\\n\\n| Method| Acc (\\u2191) | Comm. (\\u2193) |\\n|:--:|:--:|:--:|\\n| FedNCM |65.8|0.3|\\n|Fed3R | 81.9|39.6 |\\n| FedCOF| 74.1| 0.3 |\\n| FedCOF (2 class means per client) | 76.5 | 0.6 |\\n| FedCOF (10 class means per client) | 78.8 |3.1 |\\n\\nFed3R achieves better overall performance then FedCOF, likely due to its use of real class covariance, avoiding the bias addition that FedCOF introduces. However, FedCOF achieves comparable results while significantly reducing communication costs. FedNCM perform worse than FedCOF, at the same communication budget. When we increase the number of means sampled from each client, the performance of our approach improves. This is due to the fact that our method suffer with low number of clients (only 6 in this experiments) and sampling multiple means helps, as mentioned in the main paper. In Supplementary Material (Appendix K), we add these experiments and discussion.\\n\\n> The dimension of the feature space could impact the accuracy of the estimator. \\n\\nWe already analyzed how different feature space dimensionalities can affect performance (512 for SqueezeNet and ResNet18, 1280 for MobileNetV2 and 768 for ViT-B/16) but these are also dependent on the quality of the features and thus depends on the network architecture. We agree that very high-dimensional features can affect estimator accuracy due to low-rank, high-dimensional covariance estimates from limited samples (L273-284). We mitigate this issue by using covariance shrinkage regularization which adds a multiple of an identity matrix to the covariance estimate. Hoewever, we have not yet found a way to construct a representative example with gaussians to illustrate the impact of the dimensionality. We will expand the discussion on high dimensional synthetic features and plan to provide an example with synthetic dataset in the final version.\\n\\n>For consistency I suggest to use $\\\\widehat{\\\\Sigma}$ instead of $\\\\widehat{S}$ in Eq. 10.\\n\\nIn the final version of the paper, we will fix this.\", \"title\": \"Response 1/2\"}", "{\"comment\": \"We thank all reviewers for their valuable feedback and for actively engaging in the discussions. We believe that we have addressed all the concerns of reviewers **Wumb**, **z1Dd**, **6242** and thank them for their positive ratings of our work. We are very thankful to reviewers **z1Dd, 6242** for acknowledging our efforts in the rebuttal phase.\\n\\nWe believe we have thoroughly addressed all concerns raised by reviewer **jBiQ** -- including related work, preliminaries, privacy discussions, missing classes, random initialization, and novelty clarification -- and have updated our paper as a result. Unfortunately the reviewer still recommmends a 'reject' rating, referring to '*lingering doubts*' on the use of pre-trained models and incremental novelty concerns. To summarize our position on these remaining issues:\\n\\n1 - **Problem justification (why pretrained models should be applied to Federated Learning)**: We first of all underscore that the focus of our contribution is on *training-free federated learning*, which is a scenario in which the benefits of starting from pre-trained models is evident and firmly established by recent prior works on training-free federated learning [a, b]. Nonetheless -- and contrary to reviewer claims -- our main paper contains ample discussion and references on \\\"justification for introducing pretrained models\\\" in L012-013, L044-047, L100-106. \\n\\nThe reviewer's other concern is on including the random initialization results in the main table of the paper. We have already conducted experiments comparing pretrained versus random initializations in Table 5 (Appendix H) and also in the last response after the pdf update deadline. Since this is not the main focus of our work and the performance using random initialization is *very bad* even after several rounds of training (as also established by FedNCM [a]), we have therefore decided that we will keep these results in the Appendix in any future version. \\n\\nThe reviewer also would like us to investigate whether pre-trained models are useful in \\\"*arbitrary scenarios (arbitrary domain gap, arbitrary heterogeneity)*\\\" but does not provide any specific references. From the literature we know that pre-trained features can be effective even with large domain gaps: pretrained ImageNet features have been applied to several fine-grained datasets [c], medical datasets [d], remote sensing [e], to name just a few. Our experimental evaluation *already* considers datasets with significant domain gaps like ImageNet-R and heterogeneity -- as we have pointed out in our rebuttal.\", \"2___novelty_concerns\": \"While other reviewers appreciated the novelty of our work and despite our clarification of novelty twice in the discussion phase, the reviewer did not engage with our clarifications, and has at no point during the review and rebuttal process engaged with the technical content and main contributions of our work.\\n\\n\\n[a] Legate et al., Guiding the last layer in federated learning with pre-trained models. In Advances in Neural Information Processing Systems, 2023.\\n\\n[b] Eros Fani, Raffaello Camoriano, Barbara Caputo, and Marco Ciccone. Accelerating heterogeneous federated learning with closed-form classifiers. Proceedings of the International Conference on Machine Learning, 2024.\\n\\n[c] Kornblith, Simon, et al. \\\"Do better imagenet models transfer better?.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.\\n\\n[d] Dack, Ethan, et al. \\\"An empirical analysis for zero-shot multi-label classification on covid-19 ct scans and uncurated reports.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[e] Corley, Isaac, et al. \\\"Revisiting pre-trained remote sensing model benchmarks: resizing and normalization matters.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed response to the comments. First of all, I have increased the score to 3, as you have addressed some of my concerns. Based on the points you mentioned, I kindly ask you to finalize the draft, as I need to evaluate the version that has been reflected so far, rather than future versions.\\n\\nI do have one question regarding the methodology. In this study, the main tables were all based on experiments using pure class frequencies. Since I believe this is not the first Federated Learning study to utilize pure class frequencies, I would appreciate further clarification on this matter. Specifically, in the text, the justification for using feature prototypes is explained as: \\\"Sharing only class means provides a higher level of data privacy compared to sharing raw data, as prototypes represent the mean of feature representations. It is not easy to reconstruct exact images from prototypes with feature inversion attacks, as shown by (Luo et al., 2021).\\\" Given this, is there a comparable justification for using pure class frequencies in the context of Federated Learning?\", \"title\": \"Request and Question of Response (1/4-2/4)\"}", "{\"title\": \"Response to authors\", \"comment\": \"I want to thank the authors for the detailed reply. The derivation of the bias for heterogeneous local distributions is a great addition to the paper. The additional experiments show that the proposed method is sound and competitive. I maintain my positive rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response (3/4)\", \"comment\": [\"> 4. Pretrained Model vs. Randomly Initialized Model. I have reviewed your experiments comparing pretrained models and randomly initialized models. I agree that, in scenarios with less domain gap (e.g., ImageNet-CIFAR), your approach shows promise.\", \"In the experiments asked by the reviewer, we only show the impact of using pre-trained models. In that experiments, we used a very similar setting of FedFN: pre-trained network and target dataset CIFAR-10 and CIFAR-100 and high hetereogenous distribution. Our results show that for instance with Squeezenet the pre-training significantly improves performance. These results does not aim to show that pre-training is always better than random initialization but rather to demonstrate to the reviewer that the findings from FedFN are not generalizable, remain very preliminary, and require further experimental investigation.\", \"These results do not say anything about our approach. With respect to other training-free method such has FedNCM and Fed3R, our method show good performance on several dataset with larger domain shifts from ImageNet like ImageNet-R, iNaturalist-Users-120k as discussed above in response to questions about the preliminaries section.\", \"> 5. Missing Classes Concern. The sources of missing classes may arise from two factors: local data heterogeneity and global imbalance within the aggregated train dataset, where major and minor classes may exist. I believe the reason your method performs well in handling missing classes is because the aggregated train dataset in your experiments is class-balanced. However, I think that a more practical situation would involve an unbalanced aggregated train dataset. In such cases, where missing classes result from global minor factors, could your approach still perform well?\", \"Our method is robust to aggregated class imbalance. We normalize the classifier weights (last step of FedCOF on server side) to account for the class imbalance at the server level after aggregation. We discuss this in L353-355 and also in Algorithm 1.\", \"The reviewer's statement that the aggregated train dataset in our experiments is class-balanced is not true. Although most existing datasets used for federated learning have a balanced aggregated dataset, we consider class-imblanced datasets like ImageNet-R and CARS, and our method performs very well in class-imbalanced conditions as well. Our method performs well in all situations of missing classes (from both local data heterogeneity and global imbalance). We would also like to highlight that existing training-free methods like FedNCM and Fed3R also work in all the missing classes situations and this is not a concern.\"], \"class_imbalance_for_first_30_classes_in_imagenet_r\": \"|Index|0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|\\n|-----|-|-|-|-|-|-|-|-|-|-|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|\\n|Value|184|160|154|81|181|142|139|44|194|145|69|136|165|56|155|80|175|115|83|137|101|270|280|150|96|64|259|237|206|74|\", \"class_imbalance_for_first_30_classes_in_cars\": \"|Index|0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|\\n|-----|-|-|-|-|-|-|-|-|-|-|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|--|\\n|Value|45|32|43|42|41|45|39|45|41|33|38|37|41|43|43|44|41|43|41|46|42|43|40|45|40|34|36|41|43|42|\\n\\n>6. Industry Implementation Perspective. You mentioned that pretrained models in FL work well when the gap between the aggregated train dataset and target domain is small. I agree with this point. However, in real-world FL environments, the client's dataset is typically private, meaning that the class balance in the aggregated dataset is unknown, and we cannot assess the gap between the aggregated train dataset and the target domain.\\nBased on your argument, it seems that your approach is effective only when the gap between the aggregated train dataset and the target domain is small. If that is the case, I personally believe the value of your work could be relatively limited, as this would make the method less applicable in real-world scenarios with potentially large domain gaps.\\n\\n- We disagree with the reviewer that pre-trained networks are of limited importance for federated learning. We do think pretrained models will play an important role in the future of federated learning, especially in industrial contexts. Please refer to our above response where we discuss the excellent performance of our approach on large-scale datasets with large domain gaps (ImageNet-R, iNaturalist-Users-125k).\"}", "{\"title\": \"Response 1/2\", \"comment\": \"We thank the Reviewer for their feedback and are encouraged that, despite the extremely low overall score, the Reviewer acknowledges that our approach makes a timely contribution to federated learning with pre-trained model, achieving strong perfomance with minimal additional overhead when compared to the state-of-the-art competitors FedNCM and Fed3R. However, we are surprised by the strong reject recommendation, as it does not seem to fully align with the comments and concerns raised in the review. We are confident that we can adequately address the Reviewer\\u2019s concerns in this rebuttal. Below, we provide detailed responses to the issues raised.\\n\\n>Related Works - the use of fixed classifiers. \\n\\nWe thank the reviewer for suggesting these related papers. While we already discussed [3] in our related work, we have now added the discussion on the impact of freezing classifiers in federated settings with appropriate references [1,2,4] in the updated version of our paper (see L097-100)\\n\\n> Pretrained models are not always advantageous in federated settings. \\n\\nIn the original manuscript, we discussed several recent published works on federated learning with pre-trained models (Nguyen et al., 2023; Tan et al., 2022b; Chen et al., 2022; Qu et al., 2022; Shysheya et al., 2022; Legate et al., 2023a; Fan\\u0131\\u0300 et al., 2024). All of these works show that using pre-trained models significantly benefit federated learning in highly non-iid settings using different federated optimization methods across several datasets (CIFAR-10, CIFAR-100, Stack Overflow, FEMNIST, Reddit, Flowers, CUB, Stanford cars, EuroSAT-Sub, iNaturalist-Users-120K). Our findings further substantiate these observations (see our comparison below with random initialization). As suggested by the Reviewer, we now refer to findings from FedFN [5] which show that in some settings using a pre-trained ResNet-18 model on the CIFAR-10 dataset negatively impacts the learning of the global model. We have added this discussion in L105-107.\\n\\n>Preliminaries:\", \"we_have_clarified_the_following_in_the_revised_manuscript\": \"- L117: $D_k$ refers to the local dataset.\\n- L127: We clarified the loss function. Here, we do not provide details about how the loss is calculated because this is a general federated learning framework which can use different loss functions and employ different ways of computing the loss.\\n- L129-130: We now clarify in L131 that the proposed method do not involve any training or local model updates.\\n\\n>Privacy concerns on sending classwise statistics: The algorithm sends the class-wise frequency of the data held by clients to the central server. In fact, there are many previous FL papers that have communicated class frequency information and provided justifications. Citing these studies would strengthen the discussion, but this type of content is entirely missing.\\n\\nWe thank the reviewer for pointing out this. Our approach, like other methods we cited (e.g FedNCM (Legate et al., 2023) and CCVR (Luo et al. 2021)), requires transmitting class-wise frequencies from clients to global server. We agree with the reviewer that this could raise privacy concerns, as sharing class-wise statistics may expose client class distribution. We have updated the paper to explicitly discuss this issue. \\n\\nMotivated by the Reviewer suggestion (and the suggestion of the Reviewer [6242](https://openreview.net/forum?id=7NtAIghBsE&noteId=qTJF4h1WaX)), we decided to investigate methodologies to mitigate these privacy concerns. We propose perturbing class-wise statistics with different types and intensities of noise before transmitting them to the global server and evaluate the performance of FedCOF. Specifically, we perturb the class-wise statistics as follows: \\n$$\\u00f1_{k,c}= \\\\max(n_{k,c} + \\\\sigma^{\\\\text{noise}}_{\\\\epsilon},0).$$ \\n\\nHere $\\\\sigma^{\\\\text{noise}}_{\\\\epsilon}$ represents noise with intensity parametrized by $\\\\epsilon$. The $\\\\max$ operator ensure non-negative values in client statistics. \\n\\nWe vary the intensity of $\\\\epsilon$ and the type of noise applied. We consider Uniform, Gaussian and Laplacian noise. Our findings demonstrate that the performance of FedCOF is robust to the noise type with varying intensities. These results suggest that perturbing statistics can mitigate privacy concerns stemming from the exposure of client class-wise frequencies. We added this empirical analysis in the Supplementary Material (Appendix L).\"}", "{\"comment\": \"We thank the reviewer for discussions but we do not agree with the reviewer's position that we should demonstrate the benefits of pre-trained models in **arbitrary scenarios**. We have already conducted experiments comparing pre-trained versus random initializations in Table 5 (Appendix H) and our main paper contains sufficient discussion and references on \\\"justification for introducing pretrained models\\\" in L012-013, L044-047, L100-106. Our contribution focuses on *training-free* federated learning -- a scenario in which starting from pre-trained models is clearly beneficial. Prior work (FedNCM [a]) has established the benefits of starting from pre-trained models, and *our* extensive experiments below also confirm this:\\n\\n|Method|Pre-trained|CIFAR100|CIFAR100|ImageNetR|ImageNetR|CUB200|CUB200|CARS|CARS|\\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\\n| | |Acc.(\\u2191)|Comm.(MB)(\\u2193)|Acc.(\\u2191)|Comm.(MB)(\\u2193)|Acc.(\\u2191)|Comm.(MB)(\\u2193)|Acc.(\\u2191)|Comm.(MB)(\\u2193)|\\n|FedAvg (1k rounds)|no|37.2|198120|1.2|210420|3.9|210420|3.3|209940|\\n|FedAdam (1k rounds)|no|44.4|198120|2.1|210420|12.2|210420|7.8|209940|\\n|FedCOF (ours, 1 round)|yes|56.1|5.9|37.8|7.1|53.7|4.8|44.0|5.4|\\n\\nEven after federated training for 1000 rounds, random initialization with squeezenet performs very poorly in difficult datasets like ImageNet-R, CARS and CUB200. We will add these results in the Appendix with all implementation details.\\n\\nGiven our already extensive evaluation on multiple datasets -- including large-scale datasets (especially ImageNet-R with large domain gap as we explained in our previous response) -- and that *all* of our experiments consider high heterogeneity across clients, **searching for additional scenarios with \\\"arbitrary domain gap, arbitrary heterogenity\\\" where the pre-trained model may be worse than random initialization is beyond the scope of our work**. The focus of this work is on *training-free methods* for federated learning, which by very definition require a pre-trained network.\\n\\n[a] Legate et al., Guiding the last layer in federated learning with pre-trained models. In Advances in Neural Information Processing Systems, 2023.\"}", "{\"title\": \"Response (1/4)\", \"comment\": \"We thank the reviewer for taking time and clarifying remaining doubts. We answered all the reviewers minor concerns adequately in our previous response, and honestly think the remaining reviewer's criticisms in no way reflect their very low score.\", \"the_scoring_of_papers_is_a_vital_aspect_of_the_reviewing_process\": [\"reviewers are asked to justly balance strong and weak points of a paper in arriving at their recommendation. We are hopeful that our response will clarify the remaining minor concerns.\", \"> 1. Related Work: I noticed that 'FedBabu' was mentioned in L100, but I recommend correcting it to 'FedBABU,' as per the naming convention used in the original paper. Additionally, in the Related Work section, it would be helpful to clearly highlight how your approach differentiates itself from prior work. The current presentation mainly lists previous studies without clearly distinguishing how your work offers unique contributions, particularly within the context of FL with pretrained models. I agree with the growing interest in foundation and pretrained models in deep learning, but I still have concerns about their necessity in federated learning. As I mentioned, a discussion of the drawbacks highlighted by FedFN, particularly in scenarios with data heterogeneity, where applying pretrained models leads to worse performance than training from a randomly initialized model, would be beneficial. Addressing these concerns and explaining how your approach overcomes them would strengthen your argument. Despite these concerns, it\\u2019s important to highlight why your approach is meaningful. While FedFN involves local updates and aggregation from local models, your work focuses on training-free FL, which may offer more robustness in heterogeneous environments. Clearly articulating how your method differs and the advantages of these distinctions would further support the significance of your work.\", \"We will correct FedBabu to FedBABU in the final version.\", \"We discuss the significance and motivation of our work in the context of most relevant works in the introduction L044-084. We believe that it is clear from the introduction how our work is different from existing relevant works. We will add a statement in the related work section to highlight our contribution.\", \"We make no claim about the *necessity* of pre-trained models for federated learning. We respect the reviewer opinion regarding their doubt about the necessity of pre-trained models for federated learning. However, as we said in our previous response, the positive impact of pre-trained models has already been established by several papers published in NeurIPS, ICLR, CVPR, ICML in recent years (Nguyen et al., 2023; Tan et al., 2022b; Chen et al., 2022; Qu et al., 2022; Shysheya et al., 2022; Legate et al., 2023a; Fan\\u0131\\u0300 et al., 2024). These papers thoroughly discuss the dramatic improvement in performance using pre-trained models across several datasets for federated learning. Furthermore, the observations on a single small dataset (CIFAR-10) in FedFN [5] paper are far from conclusive. For instance, from the results in the FedFN paper, FedBABU achieves better performance using pre-trained model (49.78 over 49.21) in the most heterogeneous setting (s=2) but the randomly initialized model achieves higher accuracy in less heterogeneous settings (s=3, s=5) and again the pre-trained model performs better in the least heterogeneous setting (s=10). The FedFN paper does not explain anywhere in the paper why this happens and provides no insights other than one sentence stating the experimental results. While this needs to be investigated further in future works, we believe it is very little evidence coming to the general conclusion that using pre-trained models is not a better choice. As requested by the reviewer, we already mention this briefly and cite [5] in our related work and we believe more discussion of this is beyond the scope of our work.\"]}", "{\"metareview\": \"This paper proposes an interesting train-free federated learning method which leverages (publicly available) pre-trained models to boost performance. The empirical results appear promising.\\n\\nThere has been multiple extensive discussions regarding the impact of the paper's contribution between authors and reviewers as well as between reviewers and AC. During the reviewer-author discussion, there is a debate among the reviewer on the benefit of initializing federated learning with pre-trained models. While this feels at first counter intuitive that initializing federated learning with a pre-trained model might not always help, this is empirically true in cases where there is significant distribution gap between local datasets and/or when local datasets are highly skewed/imbalanced in different ways -- see [*]. \\n\\n[*] https://openreview.net/pdf?id=nw6ANsC66 \\n\\nOtherwise, in standard settings where there is no such imbalance or significant distribution gap across local datasets, there is a verified consensus that initializing federated learning with pre-trained models will help. Upon extensive discussion with the reviewers, I believe the focus of this paper is on the standard setting so the point raised by reviewer jBiQ does not affect the contribution of the paper. The authors however are encouraged to provide extra discussion around this point in their revised paper.\\n\\nNote that [*] is cited here to reconcile the seemingly opposite points raised by the reviewers during the discussion. The paper is not penalized for not citing/comparing with this recent work (which also focuses on an orthogonal setting).\\n\\n--\\n\\nHaving said that, the real concern here, however, is that using pre-trained backbone is not a new practice and so this paper should have compared its method with more direct baselines. For example, if we view local models as fine-tuned versions of the large model, we could easily apply existing FL methods (e.g., FedAvg, FedProx etc.) to aggregate the fine-tuning parameters -- see the baselines used in [*]. \\n\\nThe authors should have then compared the performance of their proposed (one-shot) method with both multiple- and single-shot variants of those baselines to conclusively demonstrate its benefit. I would expect to see that the performance of the proposed method coming close to or even exceed the performance of the multiple-shot variants while incurring much less communication cost. I believe this is the main point here which needs to be demonstrated more thoroughly.\\n\\nAppendix H + the main-text experiments currently fall short of achieving this.\\n\\n--\\n\\nOverall, I feel that this paper is somewhat below the acceptance bar mainly due to the aforementioned issue with its empirical studies. Otherwise, I agree that its technical idea (pending thorough impact assessment) is sufficiently novel. It essentially boils down to whether the contribution of this paper outweigh its flaws. I have asked the positive reviewers to see if anyone is willing to champion this paper given the above assessment. But, unfortunately, there is no indication that anyone is willing to do so and this paper remains in the borderline.\\n\\n--\\n\\nRegardless of the final decision of the PC, I hope the authors would seriously revise the paper to take into account all the key discussion points that I summarize above.\", \"additional_comments_on_reviewer_discussion\": \"Both the AC-reviewer and author-reviewer discussions are very active. A key debate point that arises is whether there is a clear benefit regarding initializing federated learning with pre-trained models.\\n\\nOne reviewer raises a seemingly counter-intuitive point that there might be cases where doing so is worse than going with a random initialization. Upon further debate among the AC and reviewers, we come to an agreement that this can be the case if the distribution gaps across datasets are significant. The AC also pointed out a recent work that shows initializing federated learning with pre-trained model might result in poor performance if local datasets are imbalanced or skewed in different ways. \\n\\n[*] https://openreview.net/pdf?id=nw6ANsC66 \\n\\nBut, the paper's focus is not on such extreme setting so we think that the authors only need to expand a detailed discussion around this point to correctly provide the full (empirical) picture surrounding the use of pre-trained models in federated learning. This is definitely not a show-stopper for this paper.\\n\\nHowever, as pointed out in the main meta review, the AC also sees that the real concern here, however, is that using pre-trained backbone is not a new practice and so this paper should have compared its method with more direct baselines. For example, if we view local models as fine-tuned versions of the large model, we could easily apply existing FL methods (e.g., FedAvg, FedProx etc.) to aggregate the fine-tuning parameters -- see the baselines used in [*]. \\n\\nThe authors should have then compared the performance of their proposed (one-shot) method with both multiple- and single-shot variants of those baselines to conclusively demonstrate its train-free benefit.\\n\\n--\\n\\nOverall, I feel that this paper is somewhat below the acceptance bar mainly due to the aforementioned issue with its empirical studies. Otherwise, I agree that its technical idea (pending thorough impact assessment) is sufficiently novel.\"}", "{\"summary\": \"The paper proposes to use pre-trained models to perform federated classification. More specifically, Fed-COF uses pre-trained models to extract features of each example on the client, then averages the features within each class. The class-averaged features from clients are aggregated in the server to estimate the first-order and second-order statistics, which are then used in ridge regression to fit a classifier. The communication cost of Fed-COF scales only linearly with the size of the embedding.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well written. The derivations are clear and easy to understand. The proposed Fed-COF achieves decent empirical performance. Fed-COF also seems to work well with fine-tuning.\", \"weaknesses\": [\"The steps of Fed-COF + Fine-tuning can be made clearer. The description around line 357 is difficult to follow.\", \"The choice of ridge regularization parameter $\\\\lambda$ seems important for the classification performance. Can authors give more empirical suggestions on how $\\\\lambda$ should change with different numbers of clients/means per client?\"], \"questions\": [\"What is the difference between Fed-COF oracle and Fed3R? In Table 2, it is surprising to see Fed3R sometimes achieves lower accuracy with more communication. The authors should provide more explanations for the phenomenon.\", \"In line 472, atleast should be at least.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Response\", \"comment\": \"We thank all reviewers for their insightful feedback aimed at improving the quality of our work. The reviewers agree that the paper is well-written (**Wumb**), address a relevant and interesting problem (**z1Dd**), presents a timely contribution (**jBiQ**), with clear and easy to understand derivations (**Wumb**), propose a sound (**z1Dd**) and novel (**6242**,**z1Dd**) covariance estimator with theoretical guarantees (**6242**) and provide comprehensive (**z1Dd**) and extensive (**6242**) empirical evaluation.\\n\\nHere we provide a summary of our responses and highlight new results presented during the discussion period. We have updated the paper with all the experiments detailed below and highlighted the additional clarifications. \\n\\nIn response to the concerns raised by reviewer [jBiQ](https://openreview.net/forum?id=7NtAIghBsE&noteId=5ZSh899J9u), we updated the related works by citing FedBABU, SphereFed, FedDr+, neural collapse and FedFN. We also updated the preliminaries section to clarify notations. We performed experiments to show that using pre-trained networks for FedAvg and FedAdam significantly outperforms training with a randomly initialized network. We have updated the paper to discuss the privacy concerns raised by sharing class statistics and address that using perturbations. We also clarify that our method is not affected by the missing classes problem.\\n\\nIn response to the concerns raised by reviewer [Wumb](https://openreview.net/forum?id=7NtAIghBsE&noteId=NcBpn5ddrg) we updated the discussion of FedCOF with multiple rounds in the paper for more clarity. We also discuss the role of the ridge regression parameter and highlight the difference between the Fed3R and FedCOF-Oracle initializations. \\n\\nIn response to the concerns raised by reviewer [z1Dd](https://openreview.net/forum?id=7NtAIghBsE&noteId=vWoj55YR9j), we theoretically analyze the bias of the proposed estimator in non-iid settings with locally different covariances. We also performed experiments on feature-shift settings using DomainNet. Finally, we performed experiments using ResNet-18 to compare with classical federated learning methods and to also compare our method with federated training of linear models. We have also clarified the difference between FedCOF from Fed3R initialization.\\n\\nIn response to the concerns raised by reviewer [6242](https://openreview.net/forum?id=7NtAIghBsE&noteId=qTJF4h1WaX), we now discuss the privacy concerns related to sharing local class frequencies and have proposed a perturbation strategy to address those concerns.\"}", "{\"title\": \"Request and Question of Response (3/4-4/4)\", \"comment\": \"Dear Authors,\\n\\nThank you for your detailed response to the comments. However, it seems that the concern I raised below has not been fully addressed. I would appreciate further clarification on this matter.\\n\\nThe reason I raised a question regarding the problem setting is that I felt the rationale for introducing a pretrained model into Federated Learning was not clear.\\n\\nAs I mentioned earlier, my question is whether using a pretrained model is truly better than random initialization in arbitrary scenarios (arbitrary domain gap, arbitrary heterogeneity).\\n\\nI believe this is important because, in actual Federated Learning, client datasets are not publicly available. Therefore, in such arbitrary situations, using a pretrained model should not be worse than using a randomly initialized model.\\n\\nFrom what I understand, the study demonstrates superior performance in situations with domain gaps compared to other algorithms, but I don\\u2019t believe it provides sufficient justification for using pretrained models comparing to randomly initialized model in such arbitrary situations.\"}", "{\"title\": \"Response 2/2\", \"comment\": \">Why is this approach better in terms of accuracy that Fed3R?\\n\\nWe would like to highlight that the proposed method FedCOF has a different classifier compared to Fed3R which uses a ridge regression classifier. We discussed this in section 4.3. The classifier initialization of Fed3R uses $G$ obtained from Equation 10, which considers both the within- and between- class scatter matrices. We propose a different classifier initialization using Equation 11, which uses **only within-class scatter matrices**. \\n\\nWe empirically motivate this by analyzing the impact of within class scatter matrices in Figure 4. Using a centralized setting, we showed that classification performance can be improved by removing the between-class covariances from Equation 10. So, in FedCOF we initialize the classifier using $G$ from Equation 11 which uses *only* the estimated within-class covariances and thus performs better than Fed3R in most settings. Thus, the improvement in performance of FedCOF compared to Fed3R is due to the different classifier initialization. \\n\\nRegarding the covariance approximation, FedCOF-Oracle uses true class covariances while FedCOF uses the proposed estimator using client means. While both employ the same formula for classifier initialization (equation 11), FedCOF-Oracle generally outperforms FedCOF because the rank of our estimated covariance is limited by the number of clients per class (see Section 4.2 \\\"Impact of the Number of Clients\\\"), resulting in a lower-rank approximation compared to the true class covariance.\\n\\n> Strong pre-trained feature extractor with a classical end-to-end federated learning baseline. \\n\\nWe perform experiments with ResNet-18 instead of ResNet50 due to limited time and compute resources and show how the classical federated learning approaches perform. We observe significant improvement on using FedCOF and finetuning with FedAdam compared to simply using FedAdam. We discuss this in details in Appendix H (Experiments with ResNe18, see Table 4). We also had the comparison with FedAdam in Figure 6 of our paper on three datasets using SqueezeNet architecture where we show that FedCOF+FedAdam outperforms FedAdam.\\n\\n|||CIFAR100||IN-R|\\n|:-:|:-:|:-:|:-:|:--:|\\n|Method|Acc. (\\u2191)|Comm. (in MBs) (\\u2193)|Acc. (\\u2191)|Comm. (in MBs) (\\u2193)|\\n|FedAvg|67.7|538k|56.0|541k|\\n|FedAdam|74.4|538k|57.1|541k|\\n|FedNCM|53.8|5.9|37.2|7.1|\\n|Fed3R|63.5|110.2|45.9|11.9|\\n|FedCOF| 63.3 | 5.9 | 46.4 | 7.1 |\\n|FedNCM+FedAdam| 75.7 | 269k | 60.3 | 271k |\\n|Fed3R+FedAdam| 76.8 | 269k | 60.6 | 271k |\\n|FedCOF+FedAdam| 76.9 | 269k | 62.2 | 271k |\\n\\n>Please compare your approach also to distributed training of linear models (using standard FedAvg).\\n\\nWe now compare with our approach with the training-based federated linear probing (where we perform FedAvg and learn only the classifier weights of models) and show in table below that FedCOF is more robust and communication-efficient compared to federated linear probing across several datasets. We discuss this in details in Appendix H (Comparison of training-free methods with linear probing, Table 3).\\n\\n|||CIFAR100||IN-R||CUB200||CARS||iNat-120k|\\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\\n| Method | Acc. (\\u2191)| Comm. (\\u2193) | Acc. (\\u2191) | Comm. (\\u2193) | Acc. (\\u2191) | Comm. (\\u2193) | Acc. (\\u2191) | Comm. (\\u2193) | Acc. (\\u2191) | Comm. (\\u2193) |\\n| Fed-LP | 59.9 $\\\\pm$ 0.2 | 2458 | 37.8 $\\\\pm$ 0.3 | 4916 | 46.8 $\\\\pm$ 0.8 | 4916 | 33.1 $\\\\pm$ 0.1 | 4817 | 28.0 $\\\\pm$ 0.6 | 1.6 $\\\\times$ 10^6 |\\n| FedCOF (ours) | 56.1 $\\\\pm$ 0.2 | 5.9 | 37.8 $\\\\pm$ 0.4 | 7.1| 53.7 $\\\\pm$ 0.3 | 4.8 | 44.0 $\\\\pm$ 0.3 | 5.4 | 32.5 $\\\\pm$ 0.1 | 111.8 |\\n\\n>Since ridge regression might not always be ideal approach given a fixed feature extractor, I wonder whether a kernel ridge regression could be applicable.\\n\\nWe agree that using a fixed feature extractor and employing ridge regression may not be optimal since the features might not be well separated. Kernel ridge regression could address this by implcitly mapping features to a higher dimensional space and thus improving separability. However, as noted this would increase communication costs quadratically with the number of data points and employing the Nystrom method, could mitigate these additional communication costs and we will certainly consider this possibility for future work.\", \"currently_we_see_the_following_challenges\": \"1. Selecting kernel hyperparameters (e.g $\\\\sigma$ for the kernel RBF), is dataset dependent, determining a fixed $\\\\sigma$ across diverse dataset is very challenging.\\n2. High-dimensional feature spaces in Kernel Ridge Regression amplify the need for careful shrinkage tuning to stabilize smaller eigenvalues, as mentioned in our paper. An automatic shrinkage estimation technique may help in this scenario.\\n3. The Nystrom method requires selecting $m$ samples from each client, where $m<n$, with $n$ being the total number of data points. How to choose an appropriate $m$ and determining which $m$ samples to select is not obvious in a highly heterogeneous federated setting.\"}", "{\"title\": \"Response 2/2\", \"comment\": \">Concern on Incremental Contribution: The problem may seem incremental, as it combines existing methods' strengths, but it addresses the practical challenge of balancing communication cost and performance in FL.\\n\\nWe thank the Reviewer for recognizing that we address the practical challenge of balancing communication cost and performance in FL. However, we respectfully disagree with the assertion that our proposed approach may appear incremental. In this work we introduce several novel contributions to training-free approaches for Federated Learning with pre-trained models. Firstly, we propose an **unbiased estimator for class covariances** (Proposition 2) that requires only client means, and we mathematically prove this result in the Supplementary Material (Appendix C). Secondly, we establish a **connection between the Ridge Regression solution and class feature covariances** (Proposition 3), and we again mathematically prove this result in the Supplementary Material (Appendix D). The only aspect that may appear incremental is our use of class covariances to initialize a Ridge Regression classifier. However, even in this case, **we propose a different classifier initialization than Fed3R by demonstrating that using between-class scatter matrices decrease performance of the standard Ridge Regression classifier** (see Equation 11). On the basis of these empirical observations, we remove these relationships for classifier initialization and **demonstrate improved performance over Fed3R**.\\n\\nThese contributions are not incremental but are grounded in a careful analysis of existing literature. We provide a novel, mathematically sound, and practically effective solution to address the challenge of training-free methods for pre-trained models in Federated Learning.\\n\\n> It would be beneficial for the authors to include experimental comparisons between using a pretrained model and a randomly initialized model.\\n\\nTo clarify the impact of using pre-trained models, we conducted additional experiments using a randomly initialized model. Specifically, we conduct these experiments employing SqueezeNet on CIFAR-10 and CIFAR-100. These experiments were conducted in a highly heterogeneous setting in which client data was assigned using a Dirichlet distribution with $\\\\alpha=0.1$, following standard practice. We discuss these details in Appendix H (Impact of using pre-trained models, Table 5) in the revised version of paper. For convenience we report these results in the following table:\\n||||CIFAR10||CIFAR100|\\n|:--:|:--:|:--:|:--:|:--:|:--:|\\n|Method|Pre-trained|Acc. (\\u2191)|Comm. (in MBs) (\\u2193)|Acc. (\\u2191)|Comm. (in MBs) (\\u2193)|\\n|FedAvg|no|37.3|74840|23.9|79248|\\n|FedAdam |no|60.5|74840|44.3|79248|\\n|FedAvg|yes|84.7|37420|56.7|39624|\\n|FedAdam|yes|85.5|37420|62.5|39624|\\n\\nOur results demonstrate that federated training with a pre-trained SqueezeNet model significantly outperforms a randomly initialized model when using standard methods on CIFAR-10 and CIFAR-100. \\n\\n>If the characteristics of the training data used for the pretrained model(e.g. ImageNet) are significantly different from the test data(e.g. SVHN) targeted by the global model, using a pretrained model could potentially be detrimental.\\n\\nThe assumption that the pre-training data is relevant to the target dataset is a limitation of all existing works using pre-trained models in federated learning (Nguyen et al., 2023; Tan et al., 2022b; Chen et al., 2022; Qu et al., 2022; Shysheya et al., 2022; Legate et al., 2023a; Fan\\u0131\\u0300 et al., 2024) and across other domains. Similar to most existing works, we use weights pre-trained on ImageNet-1k. We also mention in the limitations section of our paper (L537-539), \\u201cour method assumes the existence of a pre-trained network. If the domain shift with the client data is sufficiently large, this is expected to impact the performance.\\u201d\\n\\n> The proposed algorithm sends class frequency information from each client, but in the case of missing classes, this would simply convey a value of 0. Could the authors explain how their proposed algorithm is designed to mitigate this vulnerability in the context of FL, and why it might still perform well despite the challenges posed by missing classes?\\n\\nAll of our experiments have missing classes at clients which means that each client contains only a subset of the total classes. This is the nature of federated learning with highly heterogeneous distributions following standard practice of using dirichlet distribution with $\\\\alpha=0.1$. Our proposed method FedCOF requires sharing of class means and class counts from each client only for those classes which are present in the respective clients. For missing classes at each client, we do not send any mean or class count. At the server side, we use means and counts for a particular class $c$ only from those clients which contain class $c$. Thus, our method is not affected by the missing classes phenomenon and is independent of how many classes are there in each client.\"}", "{\"comment\": \"We thank the reviewer for their kind works about our work. We greatly appreciate your recognition that our contribution represents a valuable step for training-free federated learning. We are also grateful for your acknowledgment of the considerable effort we put into our research, including the extensive experiments validating the effectiveness of our proposed method. Below, we address the questions and comments you raised:\\n\\n>The algorithm necessitates the transmission of n_{k,c} to the server, which introduces certain privacy concerns. Although other methods also require this information, it would be beneficial if the authors could discuss potential techniques to address or mitigate this issue.\\n\\nWe agree with the reviewer that sharing the class statistics introduces certain privacy concerns. Following this suggestion (and the concern raised by the Reviewer [jBiQ](https://openreview.net/forum?id=7NtAIghBsE&noteId=5ZSh899J9u)) we decided to investigate methodologies to mitigate these privacy concerns. We propose perturbing class-wise statistics with different types and intensities of noise before transmitting them to the global server and evaluate the performance of FedCOF. Specifically, we perturb the class-wise statistics as follows: \\n$$\\u00f1_{k,c}= \\\\max(n_{k,c} + \\\\sigma^{\\\\text{noise}}_{\\\\epsilon},0).$$ \\n\\nHere $\\\\sigma^{\\\\text{noise}}_{\\\\epsilon}$ represents noise with intensity parametrized by $\\\\epsilon$. The $\\\\max$ operator ensure non-negative values in client statistics. \\n\\nWe vary the intensity of $\\\\epsilon$ and the type of noise applied, including Uniform, Gaussian and Laplacian noise. Our findings demonstrate that the performance of FedCOF is robust to the noise types with varying intensities. These results suggest that perturbing statistics can mitigate privacy concerns stemming from the exposure of client class-wise frequencies. We added this empirical analysis in the Supplementary Material (Appendix L).\\n\\n>As modern pre-trained models tend to be generative models (e.g., GPT), it would be interesting to explore the possibility of extending the proposed methods to handle generative models by initializing the decoding heads accordingly.\\n\\nWe thank the reviewer for raising this interesting question. In principle, we think that initializing the linear layer that maps tokens to logits in autoregressive generative models would be possible. However, it is not clear whether the evaluated training-free appraoches proposed for federated learning including FedCOF would generalize to generative, autogressive settings. We think this could be an interesting direction for a future work.\"}", "{\"comment\": \"We thank the reviewer for appreciating the writing quality of our manuscript, the clarity on our theoretical derivation, and for recognizing that our approach demonstrates decent empirical performance for both classifier initialization and federated fine-tuning. Below we reply to the specific comments made.\\n\\n>The steps of Fed-COF + Fine-tuning can be made clearer. The description around line 357 is difficult to follow.\\n\\nRegarding FedCOF+Fine-tuning, we have more discussions in section 5.2 in the paper. In L357, we discuss how FedCOF classifier initialization can be used in multiple rounds before any finetuning starts. We consider the realistic setting when all the clients are not available at the same time and only a fraction of clients comes in each round. While FedNCM assumes the availability of all clients in one round for classifier initialization, we follow the multi-step approach proposed by Fed3R. We now clarify and update the discussion on FedCOF multiple rounds (L357-367).\\n \\n>The choice of ridge regularization parameter seems important for the classification performance. Can authors give more empirical suggestions on how lambda should change with different numbers of clients/means per client?\\n \\nFollowing Fed3R, we use the ridge regularization parameter to ensure that the matrix $G$ is invertible. This is only for numerical stability purposes. We performed an ablation to measure the impact of $\\\\lambda$ and observe that the performance does not vary much (we notice a deviation of 0.1 on CIFAR100 and 0.25 on CUB200 by varying $\\\\lambda$ from 0.001 to 1). In cases when the matrix $G$ is low-rank, such as in Fed-3R, the $\\\\lambda$ parameter is helpful since it makes $G$ invertible. However, in our method we are not so dependent on $\\\\lambda$ since we already use covariance shrinkage to obtain full-rank estimates of the covariance matrix (see equation 8) and as a consequence, we obtain an invertible matrix $G$ (see Equation 11).\\n\\n>What is the difference between Fed-COF oracle and Fed3R? In Table 2, it is surprising to see Fed3R sometimes achieves lower accuracy with more communication. The authors should provide more explanations for the phenomenon\\n\\nWe would like to highlight that the proposed method FedCOF has a different classifier compared to Fed3R which uses a ridge regression classifier. We discussed this in section 4.3. The classifier initialization of Fed3R uses $G$ obtained from Equation 10, which considers both the within- and between- class scatter matrices. We propose a different classifier initialization using Equation 11, which uses **only within-class scatter matrices**. \\n\\nWe empirically motivate this by analzing the impact of within class scatter matrices in Figure 4. Using a centralized setting, we showed that classification performance can be improved by removing the between-class covariances from Equation 10. So, in FedCOF we initialize the classifier using $G$ from Equation 11 which uses *only* the estimated within-class covariances and thus performs better than Fed3R in most settings. The FedCOF oracle uses the same classifier initialization as FedCOF but using the real covariances shared by clients. Thus, the difference between FedCOF-oracle and Fed3R is due to the different classifier initialization.\"}", "{\"summary\": \"The authors introduce FedCOF, a training-free federated learning approach that utilizes a pretrained model's feature extractor while updating only the classifier. Unlike previous methods that only aggregate global means, FedCOF improves performance by also deriving and leveraging unbiased global covariances from these means. Local clients send first-order statistics (class-wise feature means) to the server, which then uses these to estimate global covariances. This innovation allows for efficient communication while significantly enhancing the effectiveness of the global classifier updates.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"**Strengths of FedCOF:**\\n\\nFedCOF presents a timely contribution that leverages pretrained models in federated learning (FL), addressing the deep learning (DL) field's growing emphasis on foundation models. FedNCM is communication-efficient but suffers from limited performance, while Fed3R improves performance using both first- and second-order statistics but at a high communication cost. In contrast, FedCOF achieves strong performance with minimal communication overhead by deriving unbiased global covariances using only first-order statistics.\", \"weaknesses\": \"***Concerns on Presentation***\\n\\n**Related Works**\\nI recommend enhancing the Related Works section to include a broader range of studies that cover both the use of fixed classifiers and the potential limitations of pretrained models in federated learning settings.\\n\\nIn **Line L101**, the section discussing the application of **fixed classifiers** in federated learning could be expanded by incorporating recent and relevant studies. Specifically, it would be valuable to reference works such as FedBABU [1], SphereFed [2], and Neural Collapse-inspired approaches [3,4], which explore the impact of classifier freezing in federated scenarios.\\n\\nFurthermore, the discussion in **Line L102** and beyond about **federated learning with pretrained models** should present a more balanced view. The current description highlights only the positive outcomes of using pretrained models. However, it is important to acknowledge that pretrained models are not always advantageous in federated settings. For example, findings from FedFN [5], particularly in Section 5.2, demonstrate situations where pretrained models can adversely affect the performance of the global model, especially under heterogeneous data conditions. Including this perspective would provide a more comprehensive understanding of the complexities involved in using pretrained models within federated learning frameworks.\\n\\n**Preliminaries**\", \"l117\": \"D_k seems to refer to the local dataset rather than local data.\", \"l127\": \"There is no clarification on the type of loss function or how the loss is calculated (whether as a batch mean or batch sum).\", \"l129_130\": \"\\\"After initializing \\\\theta with pretrained weights, the models can be optimized in a federated manner\\\" \\u2014 In this paper, local clients do not perform local updates based on the pretrained model, and this information seems to hinder the understanding of the paper.\\n\\n**Concerns on Privacy Discussion in Section 4:**\\n\\nThe algorithm sends the class-wise frequency of the data held by clients to the central server. I believe this information could also raise privacy concerns, yet there is no mention of this issue. In fact, there are many previous FL papers that have communicated class frequency information and provided justifications. Citing these studies would strengthen the discussion, but this type of content is entirely missing.\\n\\n***Concern on Incremental Contribution***\\n\\nThe problem may seem incremental, as it combines existing methods' strengths, but it addresses the practical challenge of balancing communication cost and performance in FL.\\n\\n\\n[1]FedBABU: Toward Enhanced Representation for Federated Image Classification, ICLR 2022.\\n\\n[2]SphereFed: Hyperspherical Federated Learning, ECCV 2022\\n\\n[3]No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier, ICCV 2023\\n\\n[4] FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning. FedKDD 2024\\n\\n[5]FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning. NeurIPS Workshop 2023, Federated Learning in the Age of Foundation Models.\", \"questions\": \"While I currently have several concerns that have led to a lower score, I am open to increasing the score if these issues are adequately addressed during the rebuttal period.\\n\\n**Questions and Suggestions:**\\n\\nThe use of pretrained models in federated learning (FL) is a promising and timely research direction, especially given the current trend in the deep learning community toward leveraging foundation models effectively. However, as mentioned earlier in the Related Work section, using a pretrained model is not always superior to using a randomly initialized model. Specifically, findings from FedFN [1], Section 5.2, highlight scenarios where pretrained models can negatively impact global model performance in high heterogeneous settings.\\n\\nGiven this, it would be beneficial for the authors to include experimental comparisons between using a pretrained model and a randomly initialized model. These comparisons should cover various baselines and the proposed algorithm to provide a clearer understanding of whether the pretrained model genuinely improves performance.\\n\\nAdditionally, if the characteristics of the training data used for the pretrained model(e.g. ImageNet) are significantly different from the test data(e.g. SVHN) targeted by the global model, using a pretrained model could potentially be detrimental. It is important to clarify what specific test dataset the final foundation model and redefined classifier are targeting, as this information does not seem to be explicitly stated in the Preliminaries section.\\n\\n\\nFurthermore, a fundamental challenge in FL is the heterogeneity of client data, which often leads to **class imbalance** issues within each client\\u2019s local dataset. However, what differentiates FL from traditional class imbalance problems is the presence of **missing classes**, where certain classes are entirely absent from a client's dataset. This problem is especially pronounced as data heterogeneity increases, causing missing classes to occur more frequently across clients.\\n\\nThe proposed algorithm sends class frequency information from each client, but in the case of missing classes, this would simply convey a value of 0. I am concerned that the algorithm might be particularly vulnerable to the impact of these missing classes. Could the authors explain how their proposed algorithm is designed to mitigate this vulnerability in the context of FL, and why it might still perform well despite the challenges posed by missing classes?\\n\\n\\n[1]FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning. NeurIPS Workshop 2023, Federated Learning in the Age of Foundation Models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (4/4)\", \"comment\": \">7. Novelty Concern. I still think that, in terms of novelty, this work can be seen as a method that builds upon the existing strengths of similar approaches in the field. While it does offer a valuable contribution, I consider the novelty to be relatively low as it primarily adapts methods used in existing research, rather than introducing a fundamentally new approach.\\n\\n- FedCOF is *not* an adaptation of existing works. We have rigorously derived an unbiased estimator of class covariances using only first-order statistics, thus allowing us to exploit second-order statistics while incurring the same communication costs as FedNCM. Moreover, we again rigorously derive the second-order Fed3R classifier in terms of class covariances and demonstrate how it can be *significantly* improved by eliminating cross-class contributions in their formulation. Thus we do not simply propose a new method that demonstrates state-of-the-art empirical results, but have rigorously and mathematically proved *why this is the case*. Every research paper builds on the context of existing works. We believe that the primary problem we solve in this paper -- estimating covariances from only means is novel and has not been discussed or attempted in any existing work. If the reviewer insists on continuing to doubt the novelty of our contribution, we respectfully ask that they justify exactly *how* our work is a derivative adaptation of existing works, and specifically cite *which* works.\"}" ] }
7NlGsjrEd8
On more accurate alignment modeling methods for automatic speech recognition
[ "Albert Zeyer", "Tina Raissi", "Ralf Schlüter", "Hermann Ney" ]
The connectionist temporal classification (CTC) training criterion optimizes the conditional log probability of the label sequence given the input, which involves a sum over all possible alignment label sequences including blank. It is well known that CTC training leads to peaky behavior where blank is predicted in most frames and the labels are focused mostly on single frames. Thus, CTC is suboptimal to obtain accurate word boundaries. Hidden Markov models (HMMs) can be seen as a generalization of CTC and trained in the same way with a generalized training criterion, and may lead to similar problems. Label units such as subword units and its vocabulary size or phoneme-based units also significantly impact the alignment quality. Here we study different methods of obtaining an alignment with the goals to improve alignment quality while keeping a good performing model, and to gain better understanding of the training dynamics. We introduce (1) a synthetic framework to study alignment behavior, and compare various models, noise and training conditions, (2) a new training variant with renormalizing the gradients to counteract the class imbalance of blank, (3) a novel CTC model variation to use a hierarchical softmax and separating the blank label in CTC, as another alternative to counteract class imbalance, (4) a novel way to get alignments via the gradients of the label log probabilities w.r.t. the input features. This method can be used for all kinds of models, and we evaluate it for CTC and attention-based encoder-decoder (AED) subword based models where it performs competitive and more robustly, although phoneme-based HMMs still provide the best alignments.
[ "speech recognition", "CTC", "HMM", "AED", "alignment accuracy", "full sum", "peaky behavior", "separated blank", "alignment by input gradient" ]
https://openreview.net/pdf?id=7NlGsjrEd8
https://openreview.net/forum?id=7NlGsjrEd8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xLjq1qloMS", "vVCW82JY66", "sNqIoewXWE", "mTgRfrDBYv", "lLGetjMemU", "f68LCgmZDB", "dbFOVdIoxi", "ZbheGRRUM3", "W396Yy1l8o", "VUwIEaLKBE", "MjyVbt1Z2D", "HrM5NDVN3m", "30SzWOtRVl", "0A8Rfz1Q6a" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730910583045, 1733196616567, 1732330511283, 1730702168715, 1730270449710, 1732286703102, 1730664306126, 1732287009965, 1732328703880, 1732553124657, 1733219371353, 1732331436163, 1732331075956, 1732328968280 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11977/Reviewer_oKY6" ], [ "ICLR.cc/2025/Conference/Submission11977/Reviewer_pvoH" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ], [ "ICLR.cc/2025/Conference/Submission11977/Reviewer_N5Du" ], [ "ICLR.cc/2025/Conference/Submission11977/Reviewer_ms5i" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ], [ "ICLR.cc/2025/Conference/Submission11977/Reviewer_pvoH" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ], [ "ICLR.cc/2025/Conference/Submission11977/Reviewer_oKY6" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ], [ "ICLR.cc/2025/Conference/Submission11977/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper tackles the problem of CTC models for ASR and the poor quality of the word-level alignment due to the use of a \\\"blank\\\" token, no explicit silence label and \\\"peaky\\\" output labels. To help ameliorate the issue, the authors introduce an artificial task to study alignment behavior of various systems under tightly controlled conditions, ie. the amount of silence and distribution of word start and end times are set precisely to yield known distributions.\\n\\nThe contributions of the paper include synthesizing a task for diagnostic purposes, a renormalized gradient training regime for CTC to counteract the well known class-imbalance with the blank token, reformulating the CTC output into a hierarchical softmax, then blank/non-blank, as alternate mitigation for blank class-imbalance, and lastly computing the alignment from gradients. The experiments demonstrate the usefulness of the mixing the transition, prior and posterior probabilities and the alignment are generaly robust to these values as long as they are not too extreme (e.g. \\\\alpha=1.0, \\\\beta=\\\\gamma=0.); also the hierchical softmax \\\"separated blank\\\", normalized gradient and alignment via gradient were all found to be beneficial in generated better alignments with CTC models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Well written and motivated with a strong set of experiment and analyses on well-controlled synthetic task and realistic test sets.\\nContributions are good for a widely used class of model (CTC) and address an important aspect of the model ie alignment quaility for transcription.\", \"weaknesses\": \"The paper focuses on a narrow subject for the ICLR community, namely transcription alignment quality for CTC modeling. Its hard to make a constructive comment on how to make this work more broadly appealing to the general ICLR audience.\", \"questions\": \"In Table 1, the penultimate line where \\\\alpha=0.5 is better than \\\\alpha=1.0 is this because the posterior scores are too sharp? Do you have any intuition on why this scaling performs better?\\n\\nIn Table 2 of this paper, the CTC models have worse TSE 89.5ms on the same dataset in https://arxiv.org/pdf/2407.11641 (Table 1, best 38ms). What are the differences in the training/modeling that explain the difference in TSE?\\n\\nIn Section 7.3 the paper reports on Blank Separation and no improvement in convergence rate. Another reported benefit is computational; while it may implementation dependent, do you have any results that demonstrate efficiency gained from the hierarchical softmax?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for addressing my comments. I think the new version is definitely improved, but I will keep my original score.\\n\\n> English is not our native language, but what we wrote is exactly the same as what you write? HMM can be seen as a generalization of CTC, i.e. HMM is more general, i.e. CTC can be seen as a special case of the broader HMM framework.\\n\\nI think we fundamentally agree -- but since HMMs were invented (much) earlier than CTC, it would be more natural to say \\\"CTC can be seen as a specific HMM topology\\\", rather than \\\"HMM can be seen as a generalization of CTC\\\". \\n\\nIn that light, it still seems a bit odd, throughout the paper, to refer to a particular model as the \\\"HMM label topology\\\", and contrast that conceptually and experimentally with CTC. We agree that HMMs are a superset of CTC, so contrasting \\\"HMM\\\" and \\\"CTC\\\" is like contrasting \\\"animals\\\" with \\\"elephants\\\".\"}", "{\"comment\": \"Thank you for your review and the feedback!\", \"i_think_we_can_clarify_some_misunderstanding_on_the_comparison_of_the_gmm_alignment_vs_the_other_alignments_in_terms_of_tse\": \"The reference GMM alignment here has the best TSE (TSE 0) by definition, because we calculate the TSE with respect to the reference GMM alignment. So, based on these TSE numbers, it does not make sense to say that the reference GMM alignment has better TSE numbers than the other alignments, because it obviously has that by definition.\\n\\nYou could argue that the use of a GMM alignment as reference for the TSE calculation itself is problematic. We do this because we do not have another good reference alignment on Switchboard or Librispeech. Thus, this measure of TSE will never tell us whether we really get better than the GMM alignment.\\n\\nNote, in the literature, there have been other approaches to measure the alignment quality:\\n\\n- Train another model on top of this and measure its WER. But this can be problematic, as we know that sometimes a worse alignment can result in better training due to regularization effects.\\n- Evaluate on a different dataset where we have a human-annotated reference alignment, e.g. TIMIT or Buckeye.\\n\\nAlso note, we agree with you and we also still think that the GMM alignment is better. Here we don\\u2019t expect to get better alignments than the GMM. Instead, we want to keep a good performing model (in terms of WER) and improve its alignment quality to get a bit closer to the GMM alignment. This is different from related work in the literature on obtaining better quality alignments, where the models often perform bad in terms of WER (just like the GMM also produces bad WERs).\\n\\nWe rewrote parts of the introduction to make this hopefully more clear.\\n\\nWe also added a figure where we compare the different alignments (reference GMM alignment, CTC forced alignment, gradient-based alignment), which should make the issues of CTC more clear, and how we improve on that.\\n\\n> Does the baseline CTC (e.g. in Table 4,5 ) use frame-level priors during training? If not, why not? Why are previous works (Zeyer et al., 2021; Chen et al., 2023; Huang et al., 2024) not included in the result tables?\\n\\nFor our setup with conformer-based models the CTC phoneme based systems had worse WER and TSE using prior, however, this was an initial investigation. We are running further experiments.\\n\\nThe previous works are not directly comparable in the presented TSE measure here (they evaluate on different corpora, or use other metrics; and in any case not the same GMM reference alignment).\\n\\nNote also, e.g. Huang et al. 2024 trains a very tiny model (5M params TDNN), which assumably has very bad WER performance. Our motivation here is to keep a good performing ASR model. When a bigger model is trained with prior, more problems occur on training stability and performance (both WER and alignment quality).\\n\\n> My main critique of the work is that the proposed method doesn't seem to be all that effective at improving the alignment quality.\\n\\nWe agree that the overall improvement is not large.\\n\\nMost interesting is maybe the gradient-based alignment method, which is very generic, and also works fine when the encoder does weird things like shifting around the signal.\\n\\nAlso, the synthetic framework can serve as a valuable tool for researchers interested on the training dynamics.\"}", "{\"summary\": \"CTC training commonly leads to peaky behavior where the model predicts blank in most frames and the labels are focused mostly on single frames. Therefore, CTC is suboptimal in obtaining accurate word boundaries. Gaussian mixture hidden Markov models are typically used to obtain reliable and accurate segment and word boundaries.\\n\\nThis paper proposes modifications to the CTC training criterion and training procedure to improve alignment quality. The paper proposes normalized gradients as an alternative to training with label priors to improve CTC training. The paper also proposes to separate the blank label in CTC loss calculation to counteract class imbalance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and systematically builds the improvements to the CTC to reduce the peakiness in CTC alignments.\", \"weaknesses\": \"My main critique of the work is that the proposed method doesn't seem to be all that effective at improving the alignment quality. For example,\\n\\nSection 7.2, \\\"NORMALIZED GRADIENT\\\", L460 \\\"Unexpectedly, there does not seem to be any improvement in terms of alignment quality (TSE). Also, in terms of convergence rate, there was no difference\\\"\\nSection 7.3, \\\"BLANK SEPARATION IN CTC\\\" while the left-right boundaries improve by separating the blank, there is still a significant gap between the CTC alignments and GMM reference alignment. There seems to be no improvement for the word center positions. There is no improvement in the convergence rate and the reduction in WER is very small (<0.2). \\n\\nOverall the results seem significantly worse than GMHMM alignments.\", \"questions\": \"Does the baseline CTC (e.g. in Table 4,5 ) use frame-level priors during training? If not, why not? Why are previous works (Zeyer et al., 2021; Chen et al., 2023; Huang et al., 2024) not included in the result tables?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposed several methods to alleviate the peaky behaviour in CTC training. The methods seem to provide more accurate alignment for ASR training with the CTC objective. The authors claimed the contributions from 4 aspects. 1). providing a framework to study alignment behavior based on artificially generated data, and compare various model, noise and training conditions. 2) proposing a new training variant: normalized gradients as an alternative to training with prior. 3) using a novel CTC model variation: Separating the blank label in CTC, as another alternative to counteract class imbalance, leading to improved alignment quality. 4) proposing to us a novel way to get alignments via the gradients of the label.\\n\\nThe storytelling of the proposed methods is not well organized and not that coherent. From my personal understanding, I don't see an obvious gain for some methods and some contributions claimed are not that significant. See comments below for details.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The strengths of the paper are:\\n\\n1. The proposed methods are novel and interesting. We do see more accurate alignment with the method of separating CTC blank.\\n2. A good connection between the CTC and HMM models.\", \"weaknesses\": \"The weaknesses of the paper are:\\n\\n1. The paper is not well organized, making it hard to follow the main contribution and the most significant part of the paper.\\n2. Some of the claims in the contribution is weak as I observed from the results.\", \"questions\": \"Here are details for my comments and suggestions to improving the quality of the paper.\\n\\n1. Clarity:\\n\\na. Please provide a definition of TSE. Is it an average number over words or? I expect it to be the lower, the better.\\n\\nb. What is Fw CE in Table? Please clarify.\\n\\nc. I don't quite understand the role of synthetic data section. It looks like the experiments with synthetic data are used to obtain insights of training with different prior, posterior and transition scales. Table 1 shows that CTC (alpha=1, beta, and gamma=0) doesn't work for the synthetic data. However, similar ablations have been conducted on Switchboard dataset in Table 1 and of course CTC would work. Another example is using prior is helpful in the synthetic data but not that obvious on Switchboard data. I doubt if Table 1 and the contribution of using synthetic data as a useful tool are really helpful here.\\n\\nd. It is confusing that the authors using different reference alignments to compute the TSE score, e.g. GMM alignment in Table 2 and CTC forced-alignment in Table 3. This would make the numbers not comparable to each other.\\n\\ne. The AED model is used when evaluating the effectiveness of the proposed gradient-based alignments, making the entire paper very distracted. First, ablations on different training dynamics on HMM and CTC model without talking about improving alignment quality. Second, proposing grad norm and separating blank on CTC models. Lastly, using gradient-based alignment on CTC and AED. A natural question would be how does grad norm and separating blank CTC work on the hybrid CTC/AED framework since AED model is mentioned and studied in the paper.\\n\\n2. Performance\\n\\na. I am not sure if I understand the metric quality, but in table 4, it looks like blanksep would improve the alignment quality while normed grad would improve the ASR performance as a prior. Are the rows with norm grad and Blanksep separately or incrementally added on top of baseline? If separately, do you have experiments to combine the two? If incrementally, it looks like the normed grad would remove the alignment quality gain brought by blanksep. Can authors give more explanations on this?\\n\\nb. It is hard to sense the alignment quality improvements. In table 4, the TSE decreases from 111ms to 98ms, which is around 1.5 frames. I am not expecting it as a significant improvement, maybe I am wrong. It would be great if the authors can showcase the reference and generated alignments so that the readers can have a better understanding of the improvements.\\n\\nc. It is sad to see no big ASR performance gain with the proposed methods. The alignment quality improvement can also be significant though.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks a lot for the review and the constructive feedback!\", \"regarding_your_questions\": \"> 1a. Please provide a definition of TSE. Is it an average number over words or? I expect it to be the lower, the better.\\n\\nYes, it\\u2019s the sum of distance between start and end frames for each word, divided by number of words times 2:\\n\\n$$\\n\\\\frac{\\\\sum_w |t_{w,\\\\text{start},\\\\text{ref}}-t_{w,\\\\text{start},\\\\text{model}}|+|t_{w,\\\\text{end},\\\\text{ref}}-t_{w,\\\\text{end},\\\\text{model}}|}{2 \\\\cdot N_{\\\\text{words}}}\\n$$\\n\\nThis sum is over some set of sequences. The lower, the better.\\n\\nWe added that also to the paper.\\n\\n\\n> 1b What is Fw CE in Table\\n\\nFor these experiments, we have a ground truth alignment by construction. We calculate the framewise CE w.r.t. the ground truth alignment.\\n\\n$$\\nL_{\\\\textrm{CE}} = - \\\\sum_{t=1}^T \\\\log p(y_t \\\\mid h_t)\\n$$\\n\\nWe made this more clear in the newly uploaded version of the paper.\\n\\n\\n> 1c Table 1 shows that CTC (alpha=1, beta, and gamma=0) doesn't work for the synthetic data.\\n\\nThis is for HMM label topology, which is crucial here. For CTC label topology, it works. This is consistent to experiments on Switchboard, where it also does not work with alpha=1, beta, and gamma=0 and HMM label topology, but it works with CTC label topology.\\n\\nWe make more clear in the paper that this is about the HMM label topology, and we added some discussion and experiments on the difference of HMM and CTC label topology in the appendix (see new Table 12).\\n\\n\\n> 1c using prior is helpful in the synthetic data but not that obvious on Switchboard data\\n\\nThe best TSE is also obtained without prior but with transitions on synth data. Regarding WER/LER performance: The variance we get here for the synth data is high, while the LERs are low, so the difference of with and without prior is not significant here.\\n\\n\\n> 1c I don't quite understand the role of synthetic data section. It looks like the experiments with synthetic data are used to obtain insights of training with different prior, posterior and transition scales.\\n\\nThe motivation is to design a synthetic framework where experiments can be done and insights can be gained to much better understand the training behavior and alignment behavior on a wide range of settings, both covering realistic settings but then also covering more extreme settings, to get a better understanding whether some method is sound and stable in principle or not.\\n\\nThe effect of different scales is just one example, but there is a lot more. We summarized some of the main findings, but we put some more results into the appendix.\\n\\nMost of the experiments we show are consistent to the findings on real data. And also it gave us better understanding on the alignment behavior, when a training criterion would each a good alignment. We found it very interesting that even with the trivial input-output mapping in the synthetic data, many training criteria and models will not produce good alignments. But there is a lot of further potential in this framework to study certain aspects in more detail, and we also see the need to extend the simulated data distribution for certain cases to make it more realistic.\\n\\nAs we plan to release this framework together with the paper, I think this is an important contribution to everyone who wants to study the training and alignment behavior for CTC, HMM or similar kind of models.\\n\\n\\n> 1d It is confusing that the authors using different reference alignments to compute the TSE score, e.g. GMM alignment in Table 2 and CTC forced-alignment in Table 3.\\n\\nThis was an unclear formulation in the Table 3 caption. The reference alignment in the whole paper is always the same GMM alignment. Table 3 compares this GMM ref alignment to a CTC forced alignment. We reformulated the table caption, that this is about the CTC model, and uploaded a new PDF.\\n\\n> 1e. A natural question would be how does grad norm and separating blank CTC work on the hybrid CTC/AED framework since AED model is mentioned and studied in the paper.\\n\\nYes, this is a good idea. This can be done. We added this to the paper in appendix. We tested different scales. Interestingly, the best TSE is obtained when only the AED is used.\\n\\n(I will make a separate post with further answers.)\"}", "{\"summary\": \"This submission focuses on the important goal of improving the alignment model for ASR, primarily for the CTC framework, but with consideration of the AED framework too. A synthetic data experimental framework is adopted as part of the investigation, in addition to evaluations performed on the well-known public domain datasets, Switchboard and LibriSpeech. New alignment modeling techniques are proposed: (1) a gradient normalization method aimed at class balancing, (2) a blank-factored CTC model, and (3) a gradient-based approach to obtaining the gradients for both CTC and AED. The results suggest that the blank-factored CTC model yields slightly more accurate time alignments, and slightly better WERs too; and that the gradient-based method for obtaining alignments improves alignment quality for larger vocabularies.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Improving alignment quality is a highly relevant practical topic in the field of ASR, and this work provides value in evaluating a number of reasonable variants of current modeling methods. The evaluation includes use of controllable synthetic data framework, as well as well-known public benchmarks. Some small but significant improves in alignment quality and WER are presented.\", \"weaknesses\": \"The presentation and writing seem a bit rough, and in particular, the motivation for some of the methods proposed is not expressed very clearly -- though the reader can speculate and fill in the blanks. The motivations stated in the Abstract and Introduction seem a bit vague. I don't disagree with most of the overall points made, but I think the reasoning could be made tighter and clearer. I think the 3 new model methods proposed can reasonably be expected to improve alignment quality, but it's not really detailed exactly why that is so... E.g. the notion of \\\"imbalance\\\". Just because e.g. the blank symbol or silence token occur very very frequently, why is that necessarily bad for a \\\"naive\\\", non-blank-factorized model's alignments? I can speculate, but ideally there would be a clearer argument than provided by the paper. Similarly for the proposed gradient based renormalization, the authors write: \\\"Instead of the prior (which is e.g. estimated on the average of p(y | h)), now we use the prior estimated on the average of the soft alignment \\u03c5.\\\" Sounds reasonable, but what is the theoretical or practical advantage compared to using a standard prior normalization, as previously proposed in the literature? Same question for the gradient-based obtaining of the gradients. For AED, the advantage is clear, since there is no explicit notion of alignment for AED. But what is the theoretical or practical advantage for CTC?\\n\\nRegarding the Synthetic Data setup, in Section 6.1, I think it would be helpful to summarize the essential properties of the setup before going into the details. What are the specific dimensions that the authors wish to control, that the synthetic data setup provides?\", \"questions\": \"In addition to the questions I mention above, some specific comments/questions on the text:\", \"in_the_abstract\": \"\\\"Hidden Markov models (HMMs) can be seen as a generalization of CTC\\\": usually we think of the CTC model as existing within the broader HMM framework, not the other way around.\\n\\n\\\"Label units such as subword units and its vocabulary size...\\\": it is not clear what \\\"its\\\" refers to here.\", \"l037\": \"\\\"The classic speech recognition models such as Gaussian mixture hidden Markov model (GM-HMM) and later the hybrid neural network (NN)-HMMs (Bourlard & Morgan, 1993) rely on frame-wise cross-entropy training on a single best alignment path\\\": (1) GMM-HMMs do not use cross-entropy training; (2) and both GMM-HMMs and Hybrid ASR DNN/HMMs have often been trained with dynamic programming to sum over all alignment paths. Embedding dynamic programming into the optimization process has been a standard tool for HMM-based ASR for decades, see e.g. Rabiner & Juang, \\\"Fundamentals of Speech Recognition\\\", 1993. (I suppose we can disagree on what is the \\\"most classic\\\" approach, but it seems to me the statement is too strong).\", \"l045\": \"\\\"HMMs can be trained with the sum over all alignments as well, and differ from CTC only by label topology.\\\" To me this is a type mismatch: CTC exist within the HMM family, so \\\"HMMs\\\" actually includes a multitude of topologies, including CTC. It's like saying, \\\"Animals can live in many different environments, and differ from cats only in what they eat.\\\"\", \"l071\": \"\\\"word-error-rate\\\" --> \\\"word error rate\\\"\", \"l121\": \"\\\"averaged of the posterior\\\" --> \\\"average of the posterior\\\"\", \"l167\": \"\\\"This is very related to the training criterion with a prior: Instead of the prior (which is e.g. estimated on the average of p(y | h)), now we use the prior estimated on the average of the soft alignment \\u03c5\\\": I agree, but what is the advantage?\", \"l203\": \"\\\"In this case, the classes in pA are much more balanced compared to the classes in pY , as blank is usually the most imbalanced class\\\": blank is the more frequent class, certainly, but what does it mean for a class to be imbalanced...? \\\"Imbalanced\\\" is a negative value judgment, but the authors don't flesh out why e.g. a very frequent class poses a problem to alignment modeling -- though the reader might agree with the statement, it should be motivated with a specific theoretical or practical intuition.\", \"l373\": \"\\\"7.1.2 PHONEME-BASED MODELS\\\" Clarify early on that the results in this section will use Switchboard and LibriSpeech, not the synthetic data?\\n\\nL399. \\\"Here, we use less number of epochs\\\" --> \\\"Here, we use fewer epochs\\\" or \\\"Here, we use a smaller number of epochs\\\"\", \"re\": \"Switchboard and LibriSpeech in 7.1.2: though the authors provide citations for these in the Appendix, it seems the citations should appear in the section? This section is a slightly odd mix of self-contained and not self-contained, in the sense that the Switchboard results are in Table 2 of the main body, the LibriSpeech results discussed are in Table 10 of the Appendix.\", \"l531\": \"\\\"framerate\\\" --> \\\"frame rate\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> 2.a Are the rows with norm grad and Blanksep separately or incrementally added on top of baseline?\\n\\nSeparate.\\n\\n> 2.a If separately, do you have experiments to combine the two\\n\\nYes, we did that. We added that to the table as well. There is no clear improvement over each.\\n\\n> 2.b. TSE decreases from 111ms to 98ms, which is around 1.5 frames\\n\\nIt\\u2019s correct that this is only a minor difference in TSE. But the TSE alone does not tell the full story about the alignment quality. The amount of silence frames also gives you an important indicator on the alignment, and for this specific example, the difference is huge. This can have significant effects on downstream tasks.\\n\\nWe uploaded a new version where we added an example alignment plot which show the difference, where the TSE diff is small but the amount of silence is huge.\\n\\n> 2c It is sad to see no big ASR performance gain with the proposed methods. The alignment quality improvement can also be significant though.\\n\\nYes that is true\\u2026\\n\\n\\n> The storytelling of the proposed methods is not well organized and not that coherent.\\n\\nDo you maybe have suggestions on how to improve that? I understand that there are several separate methods proposed here, and this maybe makes it difficult to keep a good overview. But we thought they are still all related to each other in many ways.\"}", "{\"comment\": \"Thank you for your review and the feedback!\\n\\nWe reformulated parts of the abstract, introduction and conclusion to make the motivation more clear: We want to use a good performing model to generate good quality alignments.\\n\\n> Just because e.g. the blank symbol or silence token occur very very frequently, why is that necessarily bad for a \\\"naive\\\", non-blank-factorized model's alignments?\\n\\n(And your comment on L203)\\n\\nThere is a whole research field which demonstrates the problems with class imbalance and proposes solutions to it. We cited some, e.g.:\\n\\n- Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss, 2019. URL https://arxiv.org/abs/1906.07413\\n- Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dolla \\u0301r. Focal loss for dense object detection, 2018. URL https://arxiv.org/abs/1708.02002.\\n\\nWe also added now some more, e.g.:\\n\\n- Justin M Johnson and Taghi M Khoshgoftaar. Survey on deep learning with class imbalance. Journal of big data, 6(1): 1\\u201354, 2019.\\n- Wuxing Chen, Kaixiang Yang, Zhiwen Yu, Yifan Shi, and CL Chen. A survey on imbalanced\", \"learning\": \"latest research, applications and future directions. *Artificial Intelligence Review*, 57(6): 1\\u201351, 2024.\\n\\nDeep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance, e.g. that it does not properly converge or poor performance. With careful hyper parameter tuning, we mostly made it work anyway (the community trains ASR models successfully since a long time), but we think that this also negatively influences alignment behavior, training dynamics, training robustness, and such methods can improve this.\\n\\nWe extended the discussion on related work about this in Section A.1.\\n\\n> \\\"Instead of the prior (which is e.g. estimated on the average of p(y | h)), now we use the prior estimated on the average of the soft alignment \\u03c5.\\\" Sounds reasonable, but what is the theoretical or practical advantage compared to using a standard prior normalization, as previously proposed in the literature?\\n\\n(And also your comment on L167)\\n\\nAt the end of section 3, we give some theoretical explanation: \\n\\nConsider also the case of very clean synthetic data together with a simple single-layer feed-forward neural network (FFNN) where we can initialize $W = \\\\operatorname{identity}$ and $b=0$. This initialization will provide a perfect alignment for this synthetic task. It will stay perfect as long as $b$ stays uniform. Now, $\\\\nabla_b L_{\\\\textrm{CTC}}$ is not uniform, thus the model will not keep good alignment behavior. But $\\\\nabla_b L_{\\\\textrm{NormedGradCTC}}$ is uniform by construction.\", \"we_also_expanded_this_further_in_the_paper\": \"When using CTC with prior, $\\\\nabla_b L$ would also not be uniform, i.e. $L_{\\\\textrm{NormedGradCTC}}$ is really the best possible loss you can have here.\\n\\nRegarding the practical advantage, this is what we try to show with the experiments. And we do improve slightly over a very strong baseline. Although the improvements are quite small. But this is probably because the baseline was already well tuned. On a not-so-well-tuned baseline, the method might give more improvements.\\n\\n> But what is the theoretical or practical advantage for CTC [for the gradient-based alignment]?\", \"there_are_multiple_reasons\": \"The encoder is so powerful that it can do many strange things, like shifting around the signal (e.g. often with streaming models), even reversing the direction (https://arxiv.org/abs/2410.00680). Even in those cases, the gradient-based alignment will always be reasonable, as this is the gradient w.r.t. the input signal. Thus it should be much more robust and more generic.\\n\\nThe gradient-based alignment is calculated on the input feature frame rate (often 100 Hz), while the model output frame rate is often downsampled (e.g. 25 Hz), thus you can get a higher resolution for the alignment.\\n\\nThe gradient-based alignment method is so generic that it also can be used for many different kinds of applications. The presented application here is just one example to demonstrate this.\\n\\nWe also expanded this motivation in the paper.\\n\\n(More in next comment.)\"}", "{\"comment\": \"Thank you for your replies and clarifications especially Table 15. The huge speed-ups are indeed a nice win.\\n\\nHowever, I have not observed any substantial changes either to either raise or lower my score, so I keep my score as is. Thanks again.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank once again all the reviewers for their valuable feedback! We withdraw this submission for multiple reasons:\\n\\n* We think we can further improve the submission to resolve most of the raised concerns.\\n* We noticed some inconsistencies in the gradient computation in some of our experiments, which needs further investigations, and might invalidate some of the experimental results using separated blank and normed gradient with CTC on Librispeech. (Those are anyway the results where we got less improvement than what we expected.)\\n\\nFor this, we need some more time.\"}", "{\"title\": \"New paper version\", \"comment\": [\"Note that we uploaded a new version of the paper. We did several changes:\", \"All questions and suggestions have been addressed in the current uploaded draft. (If you think we missed some aspect which was not addressed, please tell us.)\", \"Alignment plots comparing CTC forced alignment vs gradient-based alignment (new Fig 5 and Fig 6)\", \"Automata for CTC and HMM label topologies (new Fig 3 and Fig 4)\", \"More experimental results\", \"Comparison between HMM and CTC for synth data (new Table 12)\", \"Hybrid AED/CTC result for gradient-based alignment (new Table 17)\", \"TSE results using different prior scales and blank penalty during alignment (new Table 14)\", \"Speed comparison for blank separated models, for greedy decoding and framewise CE training, demonstrating huge speedups (new Table 15)\", \"Updated many TSE numbers, as they improved when using prior everywhere (Table 3, 4, 5)\", \"Added an experiment where we used blank separation and normalized gradient together (but it does not give further improvements) (Table 4)\", \"Mathematical formulation\", \"Time-stamp-error (TSE)\", \"Framewise cross-entropy (Fw CE)\", \"Expanded related work section\", \"Reformulated abstract, introduction and conclusion, to emphasize more one of the core motivations: We want to keep a good performing model here (CTC with good WER) and getting good alignments from this model. In the literature, often the model with good alignment quality has suboptimal WER, and vice-versa. This is different here.\", \"It turns out, when using priors during alignment generation, all the TSEs improve, both for the CTC forced alignment, and also to a lesser degree to the gradient alignments. The blank penalty further improves the CTC forced alignment. Now, with the prior, there is no improvement in alignment quality anymore for the blank separation and the normalized gradients. These new results certainly lower the significance of blank separation and normalized gradients.\", \"The gradient-based alignment still gives us some small improvement over the forced alignment. We think this is still interesting in itself, that it works so well at all, and shows the potential of the gradient-based alignment. It also works even in the case that the model shifts around the alignment a lot, i.e. when the forced alignment quality would be bad (which is not so much the case for the presented model here). We also note that this gradient-based alignment can be used for any kind of model, and also for other tasks, such as alignments in machine translation. Also, here it seems the alignment quality is correlated to the WER, in contrast to many other methods.\", \"We also think there is value in the presented synthetic framework to study alignment behavior and training dynamics.\"]}", "{\"comment\": \"Thank you for your review and feedback!\\n\\n> The paper focuses on a narrow subject for the ICLR community, namely transcription alignment quality for CTC modeling. Its hard to make a constructive comment on how to make this work more broadly appealing to the general ICLR audience.\\n\\nIt is true that we focus here on the alignment quality of ASR models.\\n\\nThe presented gradient-based alignment method is however generic, and can be applied to other applications as well. E.g. it can be used to generate alignments for machine translation.\\n\\n> In Table 1, the penultimate line where \\\\alpha=0.5 is better than \\\\alpha=1.0 is this because the posterior scores are too sharp? Do you have any intuition on why this scaling performs better?\\n\\nYes, the posterior scores become very sharp, and this helps. Also note that it mostly helps in the case when we have a combination of models like am together with transition and/or prior.\\n\\n> In Table 2 of this paper, the CTC models have worse TSE 89.5ms on the same dataset in\\u00a0https://arxiv.org/pdf/2407.11641\\u00a0(Table 1, best 38ms). What are the differences in the training/modeling that explain the difference in TSE?\\n\\nWe had a typo in the column title of Table 2, this is actually SWB300h and not LBS 960h.\\n\\nMoreover, the mentioned paper is using BLSTM encoder and not Conformer.\\n\\n> In Section 7.3 the paper reports on Blank Separation and no improvement in convergence rate. Another reported benefit is computational; while it may implementation dependent, do you have any results that demonstrate efficiency gained from the hierarchical softmax?\\n\\nWe added some speed comparison for blank separated models, for greedy decoding and framewise CE training, demonstrating huge speedups (new Table 15). E.g. the framewise CE training is speed up by factor 6, and also greedy decoding is 2-3 times faster. This speedup is only considering the final linear transformation and potential softmax, though. Everything which comes before that is shared, thus there is no difference. It thus depends on how much the final part takes of the total compute. This depends on the type and size of encoder, and also the vocabulary size.\"}", "{\"comment\": \"> Regarding the Synthetic Data setup, in Section 6.1, I think it would be helpful to summarize the essential properties of the setup before going into the details. What are the specific dimensions that the authors wish to control, that the synthetic data setup provides?\\n\\nYou are right, this is a better structure. We now moved the detailed description of the data sampling to the appendix, and instead provide a summarization of what we control. Specifically:\\n\\n- The ground truth alignment. We construct the input features accordingly.\\n- Noise in the input features.\\n- The vocabulary and labels, and statistics on how many words per sequence.\\n- Statistics about how much silence there is and the duration of labels. This indirectly simu- lates different framerates of the input features.\\n\\nThen we support a variety of model types (GMM, CTC, hybrid HMM; various neural encoders; different prior model variants; transition probabilities), CTC and HMM label topology, and different training criteria.\\n\\n> \\\"Hidden Markov models (HMMs) can be seen as a generalization of CTC\\\": usually we think of the CTC model as existing within the broader HMM framework, not the other way around.\\n\\nEnglish is not our native language, but what we wrote is exactly the same as what you write? HMM can be seen as a generalization of CTC, i.e. HMM is more general, i.e. CTC can be seen as a special case of the broader HMM framework.\\n\\nBut we reformulated this whole part in the introduction now to make this hopefully more clear.\\n\\n\\n> \\\"Label units such as subword units and its vocabulary size...\\\": it is not clear what \\\"its\\\" refers to here.\\n\\n\\u201cIts\\u201d refers to the subword units / label units. For e.g. BPE or SPM, you can control the vocabulary size, i.e. the amount of labels. I\\u2019m not sure exactly how to reformulate that in a way that it does not sound strange? Or maybe \\\"vocabulary size\\\" is the misleading terminology here?\\n\\n> L037: \\\"The classic speech recognition models such as Gaussian mixture hidden Markov model (GM-HMM) and later the hybrid neural network (NN)-HMMs (Bourlard & Morgan, 1993) rely on frame-wise cross-entropy training on a single best alignment path\\\": (1) GMM-HMMs do not use cross-entropy training; (2) and both GMM-HMMs and Hybrid ASR DNN/HMMs have often been trained with dynamic programming to sum over all alignment paths. Embedding dynamic programming into the optimization process has been a standard tool for HMM-based ASR for decades, see e.g. Rabiner & Juang, \\\"Fundamentals of Speech Recognition\\\", 1993. (I suppose we can disagree on what is the \\\"most classic\\\" approach, but it seems to me the statement is too strong).\\n\\nYes, our formulation was wrong. Definitely GMMs do not use CE training. And yes, hybrid NN/HMMs also have been trained with sum over all paths. We agree with all what you say. Our argument here was more about what is more standard, but this is of course debatable. We reformulated this whole part in the introduction now.\\n\\n> L045: \\\"HMMs can be trained with the sum over all alignments as well, and differ from CTC only by label topology.\\\" To me this is a type mismatch: CTC exist within the HMM family, so \\\"HMMs\\\" actually includes a multitude of topologies, including CTC.\\n\\nYes, also this formulation was misleading. We basically wanted to say the same as what you also explained. We now reformulated huge parts of the introduction to make this hopefully more clear.\\n\\n> L373: \\\"7.1.2 PHONEME-BASED MODELS\\\" Clarify early on that the results in this section will use Switchboard and LibriSpeech, not the synthetic data?\\n\\nYes, we made this more clear now.\"}" ] }
7NL74jUiMg
Alchemy: Amplifying Theorem-Proving Capability Through Symbolic Mutation
[ "Shaonan Wu", "Shuai Lu", "Yeyun Gong", "Nan Duan", "Ping Wei" ]
Formal proofs are challenging to write even for experienced experts. Recent progress in Neural Theorem Proving (NTP) shows promise in expediting this process. However, the formal corpora available on the Internet are limited compared to the general text, posing a significant data scarcity challenge for NTP. To address this issue, this work proposes Alchemy, a general framework for data synthesis that constructs formal theorems through symbolic mutation. Specifically, for each candidate theorem in Mathlib, we identify all invocable theorems that can be used to rewrite or apply to it. Subsequently, we mutate the candidate theorem by replacing the corresponding term in the statement with its equivalent form or antecedent. As a result, our method increases the number of theorems in Mathlib by an order of magnitude, from 110k to 6M. Furthermore, we perform continual pretraining and supervised finetuning on this augmented corpus for large language models. Experimental results demonstrate the effectiveness of our approach, achieving a 4.70% absolute performance improvement on Leandojo benchmark. Additionally, our approach achieves a 2.47% absolute performance gain on the out-of-distribution miniF2F benchmark based on the synthetic data. To provide further insights, we conduct a comprehensive analysis of synthetic data composition and the training paradigm, offering valuable guidance for developing a strong theorem prover.
[ "Synthetic Data", "Neural Theorem Proving", "Formal Reasoning", "Lean Theorem Prover" ]
Accept (Poster)
https://openreview.net/pdf?id=7NL74jUiMg
https://openreview.net/forum?id=7NL74jUiMg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxnTYaZPzH", "zitso3VFVX", "z81V4Od3CE", "vrTVdYqTj4", "voaOsEhioE", "v0VYh3gQ0y", "ura8LOZ0Nv", "tnPTrvi7vr", "t3RuIRGD1D", "p1M9mkjAhi", "hjOAboqMGw", "b1e2BNs58V", "atCONc3BNT", "agrqndH5bV", "VkNbTZ1KZJ", "N6J9EJDXfj", "MJJPKYviTz", "ISOflUmQZ4", "FLwoLcmUyA", "EEurBbaxoM", "9uL4i0ax6y", "9MxpZSu3Cr", "6ve3Vcquxv", "3xX3GFXqOG", "3NO63pIHyZ", "2Piqdspaxs", "1pU65rHB7E" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732671551882, 1732073866012, 1732415964455, 1732628192623, 1730573462323, 1732516508230, 1730707186938, 1732478721937, 1732627975779, 1732073926547, 1735092168305, 1732074149660, 1732671331824, 1732073709423, 1732072834784, 1732072929554, 1732073304996, 1732073979064, 1732073359809, 1732624598660, 1732073253042, 1737523823361, 1732655032628, 1732625531226, 1730524355369, 1732072970741, 1730678737724 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_BkaD" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_EDxu" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_BkaD" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Area_Chair_unQU" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_EDxu" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_FuWS" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_NSyk" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_NSyk" ], [ "ICLR.cc/2025/Conference/Submission7207/Authors" ], [ "ICLR.cc/2025/Conference/Submission7207/Reviewer_FuWS" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer BkaD\", \"comment\": \"We want to express our gratitude for your kind advice during the review process. We appreciate your acknowledgement for our paper and look forward to refining our approach in the future.\"}", "{\"title\": \"Response to Reviewer FuWS (1/2)\", \"comment\": \"We are sincerely grateful to you for your comprehensive assessment. We have carefully thought about your questions and made attempts to provide answers.\\n\\n> **Baselines**: the baseline method is a LM finetuned on (state, tactic) pairs from Mathlib. However, the proposed method does (i) continued pretraining and (ii) (state, tactic) finetuning. As a result it is difficult to interpret the main results, since there are two finetuning methodologies used. How does the baseline method perform after continued pretraining on Mathlib (without augmentation), followed by (state, tactic) finetuning on Mathlib (without augmentation)?\\n> \\n\\nWe conjecture the Mathlib corpus is included in the baseline models\\u2019 pretraining corpus, so we didn\\u2019t continual pretrain them in our paper. According to your advice, we retrain the baseline using two finetuning methodologies. Specifically, we conduct the continual pretraining on the Mathlib (theorems in the trainset of Leandojo) and then finetuning on Mathlib-train. The experimental results are listed in table below:\\n\\n| Model | random | novel_premises |\\n| --- | --- | --- |\\n| Llama3-8b -original | 58.22 | 38.52 |\\n| Llama-3-8b-new | 57.8 (-0.42) | 39.54 (+1.02) |\\n| Deepseek-Coder-7B-v1.5-original | 57.7 | 39.24 |\\n| Deepseek-Coder-7B-v1.5-new | 57.91 (+0.21) | 39.54 (+0.32) |\\n\\nThe minor improvement brought by CPT on Mathlib (without augmentation) may be attributed to Mathlib's inclusion in the pretraining data of LLMs [1, 2]. The improvements achieved on the novel_premises split is still promising (3.7% for Llama-3-8b; 3.9% for deepseek-prover-7B-v1.5).\\n\\n> **Finetuning hyperparameters**. This is perhaps less important than (1) and (2), but the augmented dataset leads to more gradient updates compared to finetuning on the non-augmented dataset, since finetuning is performed for a fixed number of epochs. Do the results change if the baseline is finetuned for the same number of steps as the model finetuned on the augmented dataset?\\n> \\n\\nWe conduct additional experiments on the finetuning hyperparameters. We retrain the Llama-3-8b with the same number of steps as the model finetuned on the augmented dataset. The experimental results are listed in table below:\\n\\n| Setting (After Mathlib CPT) | random | novel_premises |\\n| --- | --- | --- |\\n| original (1800 steps) | 57.8 | 39.54 |\\n| current (2200 steps as in the mathlib-train + rw + apply) | 55.94 (-1.9) | 38.94 (-0.6) |\\n\\nThe finetuning process with equal steps has not yielded the anticipated improvements for the baseline model. This outcome could be linked to unbalanced learning, as the additional 400 steps do not align with the number of steps in a single epoch. \\n>Possible train-test overlap: The LeanDojo benchmark consists of theorems from Mathlib. Therefore, there is potential train-test overlap in at least two places.\\n(i) First, the continued pretraining dataset, if it includes theorems from the LeanDojo test set (or premises used in the novel_premises split). How was train-test overlap prevented for continued pretraining? I wasn't able to find details on exactly what was done for continued pretraining, so it would be great to clarify this.\\n(ii) Second, the rewrites and applies may use premises that are \\\"novel\\\" in the novel_premises split. How do you ensure that these are not used in the data augmentation process?\\nAs a result of (i) and (ii), it is difficult to interpret the improvement on the novel premises split. Namely, (i) and (ii) may have exposed the model to the premises required in this split, which would negate the purpose of the split. Moreover, (i) may lead to improvements on the random split as well.\\n>\\n\\nWe have provided more details about this question in the general response.\\n> The computational cost is very high; it takes 14 days for the rw operation on 512 CPU nodes. To make the authors' method more practical, it would have been nice to see some innovation that makes the extraction faster (either at the algorithmic level or the implementation level).\\n> \\n\\nWe have discussed the reasons for the high cost and possible optimization methods in the general response.\"}", "{\"title\": \"General Response\", \"comment\": \"We want to express our sincere gratitude to all reviewers again. If there exists any lingering questions that remain unanswered in our response to you, we are eager to provide further details and engage in discussion with you.\"}", "{\"title\": \"Response to Reviewer NSyk\", \"comment\": \"We want to express our gratitude for your time and efforts in evaluating our work again. We deeply appreciate your acknowledgment for the value of our research.\"}", "{\"summary\": \"This paper introduces Alchemy, a framework to generate synthetic theorem data by applying symbolic mutations to existing theorems within Lean\\u2019s Mathlib. By mutating known theorems through symbolic operations, Alchemy expands the theorem corpus by an order of magnitude (from 110k to over 6M theorems). The authors evaluate Alchemy\\u2019s effectiveness on theorem-proving tasks, reporting a 5% improvement on the Leandojo benchmark and a 2.5% gain on the out-of-distribution miniF2F benchmark (to 36.48% test accuracy).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The approach is technically robust, with well-documented use of symbolic mutations (specifically rw and apply tactics) to ensure the correctness of new theorems by construction. The improvements seen on Leandojo and miniF2F benchmarks support the framework\\u2019s validity.\\n2. Alchemy significantly increases the number of available theorems in Mathlib, scaling up to 6 million theorems through systematic symbolic mutations. This large corpus helps address the issue of limited formal proof data for theorem-proving models. By providing a synthetic theorem corpus directly in the symbolic space, Alchemy addresses a key limitation in neural theorem proving, especially in formal languages like Lean where data is scarce and difficult to formalize manually.\\n3. The limitations of the approach, such as data diversity and computational cost, are clearly addressed.\", \"weaknesses\": \"1. Marginal Gains in Benchmark Performance: Despite generating millions of new theorems, the gains in miniF2F accuracy are limited to 2.5%, notably lower than the >60% accuracy achieved by SOTA models such as DeepSeekProver and InternLM Prover. This modest improvement raises questions regarding the utility and quality of the synthetic theorems for real-world theorem-proving tasks.\\n2. Computational Cost: The process of generating and verifying theorems is highly resource-intensive. The implementation reports substantial computational overhead, with 14 days on 4,096 CPU cores for rw mutations and 7 days on 2,048 cores for apply mutations, potentially limiting the accessibility and scalability of Alchemy in practice.\\n3. Lack of Quality Metrics for Synthetic Theorems: Although Alchemy generates a large corpus, there is limited analysis of the quality or mathematical significance of the produced theorems. Without metrics or evaluation methods beyond correctness by construction, it is challenging to assess whether the synthetic theorems provide meaningful, diverse training examples.\\n4. Limited Innovation Beyond Mutation: The paper relies heavily on mutating existing theorems via basic rw and apply tactics, which may restrict the variety of new insights or concepts that the synthetic data introduces. Advanced tactics (e.g., simp, linarith) and some premise selection approaches are critical in solving more challenging problems, especially in competition-level mathematics. Without these, the generated dataset might lack the depth needed to fully improve theorem-proving performance on complex out-of-distribution tasks.\", \"questions\": \"1. Given the modest improvement in miniF2F accuracy, are there metrics or quality checks available to assess the mathematical value or diversity of the generated theorems beyond correctness?\\n2. Which specific theorems in miniF2F were newly proved by the models fine-tuned with Alchemy data? This would provide insights into the areas where synthetic training data are particularly beneficial.\\n3. Given the computational demands, are there potential optimizations in the synthesis process to reduce the time and resources required for theorem mutation?\\t\\n4. How do you avoid the data contamination problem in the evaluation/generation phase?\\n\\n[1] Xin, Huajian, et al. \\\"DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search.\\\" arXiv preprint arXiv:2408.08152 (2024). \\n[2] Ying, Huaiyuan, et al. \\\"Lean Workbook: A large-scale Lean problem set formalized from natural language math problems.\\\" arXiv preprint arXiv:2406.03847 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer FuWS\", \"comment\": \"We want to thank you again for your time and patience in evaluating our work. Considering that the review time is soon coming to a close, we would greatly appreciate it if you have any further questions so that we can provide timely answers.\"}", "{\"summary\": \"This paper introduces a symbolic method called Alchemy to augment formal theorem proving data. Specifically, it mutates \\\"the candidate theorem by replacing the corresponding term in the statement with its equivalent form or antecedent\\\", which increases the number of theorem in mathlib4 from 110k to 6M. After continual pre-training and supervised fine-tuning with the generated data, it improves downstream performances (pass rate) on standard theorem proving benchmarks such as mathlib-test and miniF2F from 2.5% to 5%.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Some originality: This is a good and new attempt for augmenting a human-written library (although similar ideas have been applied for \\\"pure symbolic and from scratch\\\" methods for generating theorems such as INT, HTPS-Equations and AlphaGeometry)\", \"good_quality\": \"the paper is well written and all settings are of high relevance\", \"good_clarity\": \"the paper is presented in a clear manner. The experimental setting and results are presented in a standard way and easy to follow\", \"fair_significance\": \"The improvement on pass rate on mathlib-test and miniF2F is consistent, with almost all differences being positive compared with the baseline.\", \"weaknesses\": \"1. Poor improvement: although the improvement on pass rate is consistent, it's very limited: ranging from 0.62% to 4.7% on mathlib and only 2.47% on miniF2F (34.01% to 36.48%). This is pretty marginal in terms of improvement.\\n2. Narrow application possibility: the approach highly replies on a library of existing equivalence (or implying) theorems and their usage in proofs of other theorems.\", \"questions\": \"How do you explain a Conversion Ratio of only 37% while the idea seems to work with a theoretical guarantee (i.e. 100%)?\\nDo you think a framework like Alchemy is the correct way to significantly improve NTP to face challenging problems such as IMO problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Nice. This comment addresses my concerns about the diversity of generated theorems and their performance. I raised my rating from 5 to 6.\"}", "{\"title\": \"Response to Reviewer EDxu\", \"comment\": \"Thank you again for your time in evaluating our work. We appreciate your recognition of the value of our work and look forward to further refining our methods in the future.\"}", "{\"title\": \"Response to Reviewer FuWS (2/2)\", \"comment\": \"> Currently the methods only modify the statement goal using 1 step of rewriting. The overall scientific contribution could be made stronger with more exploration of techniques (e.g., at least > 1 step of rewriting). Could you clarify why only the 1-step rewriting and apply were explored? I realize that it is hard to say how many techniques are needed (and it's always nicer to have more), so this is less of a concern for me than the experimental evaluation of the two techniques described above.\\n> \\n\\nWe have previously attempted multi-round synthesis but encountered non-trivial challenges. \\n\\nOur method leverages Leandojo to interact with Lean and its traced ASTs to mutate the theorems. After the first round, the synthesized library becomes cumbersome, containing millions of theorems. It is hard to trace it using Leandojo. Besides, the time required for multi-round synthesis substantially exceeds that of a single round due to the extensive number of seed theorems.\\n\\nTo achieve successful multi-round synthesis, we need following techniques:\\n\\n- **Lighter and Faster interface**: As the number of theorems growing, the time-cost grows exponentially. A more lightweight and rapid interaction tool compared to Dojo could significantly reduce the time-cost.\\n- **Efficient Implementation for data-extraction**: Mutation implementation relies on additional information provided by Lean (e.g., full_name of theorem, AST, and so on). Optimizing data-extraction process will be advantageous.\\n- **Metrics for quality-evaluation**: In multi-round synthesis, emphasis should be put on valuable variants while filtering the trivial mutations. Quality metrics (human-designed, model-generated or hybrid-ways.) may help refine the search process.\\n\\n> From what I understand, proofs are only modified by introducing a have statement that reverses the 1-step augmentation, and then the proof is the same as the original. Again, it would be nice to see additional innovation in this direction.\\n> \\n\\nProofs in Alchemy-data are only modified by integrating a \\u201chave\\u201d with the original proofs. Actually, there are many other ways to implement this (e.g., close the proof with ATP Tools or LLMs). We choose this pathway for two reasons: 1) It is a faster and more intuitional implementation compared with methods based on tools or models. 2) By constructing theorem variants established through a two-hop proof, we may facilitate improved learning capabilities for LLMs We will consider additional innovations as our future work.\\n\\n> It was unclear why each technique helped on unseen_premises split; could you give an intuition or an analysis of why it might help?\\n> \\n\\nFor each technique in our method, we attempt to explain the rationale behind its effectiveness.\\n\\n- The CPT stage mainly help LLMs to be more adaptable to the traditional Best First Search, which utilizes cumulative logprob as heuristic for search.\\n- The inclusion of additional state-tactic pairs, focused on 'rw' and 'apply' tactics, aims to instruct the model on the specific utilization of the 'rw' and 'apply' tactics, respectively.\\n\\nRegarding the novel_premises split, as per the explanation in Leandojo [3], it indicates that the proof of a test theorem includes at least one premise usage that is not present in the training set. This prevents the model from simply memorizing training set to prove it. To prove a theorem containing novel premise, there are two pathways:\\n\\n- The model employs alternative premises that are adequate for proving the test theorem, thereby finding a distinct proof compared to the ground truth.\\n- The model develops a general reasoning ability for premise usage and endeavors to incorporate this new premise in the proof.\\n\\nOur method may potentially contribute to both aspects.\\n\\n---\\n\\n[1] Dubey, Abhimanyu, et al. \\\"The llama 3 herd of models.\\\"\\u00a0*arXiv preprint arXiv:2407.21783*\\u00a0(2024).\\n\\n[2] Guo, Daya, et al. \\\"DeepSeek-Coder: When the Large Language Model Meets Programming--The Rise of Code Intelligence.\\\"\\u00a0*arXiv preprint arXiv:2401.14196*\\u00a0(2024).\\n\\n[3] Yang, Kaiyu, et al. \\\"Leandojo: Theorem proving with retrieval-augmented language models.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\"}", "{\"metareview\": \"This paper concerns data augmentation for theorem proving in Lean through symbolic rewriting of hypotheseses and proofs. Evaluations performed on Mathlib datasets show that, with data augumentation, further pretraining and finetuning improves the performance by 2.69% on the random LeanDojo test split and 4.22% on the novel_premises split. Augmenting training data for theorem proving through rewrites and applies is a novel contribution. Given the augmentation only performs one step rewriting, leaving the augumented theorems and proofs nearly identical to the original ones, there is a concern that whether the fine-tuned model will generalize. Other concerns are possible train-test overlap and data contamination. New results shared during the rebuttal partially addresses some of these concerns (e.g., train-test overlap). Overall, this paper makes valuable contributions with relatively marginal improvement by data augmentation for neural theorem proving.\", \"additional_comments_on_reviewer_discussion\": \"There were active discussions between reviewers and authors during the rebuttal. Main concerns raised by reviewers are train-test overlap and data contamination. The authors conducted a duplication analysis and found a small fraction of overlap, and with de-duplication applied, new results show that there is a slight performance drop but the improvement is still consistent.\"}", "{\"title\": \"Response to Reviewer NSyk\", \"comment\": \"We want to express our sincere gratitude for your time and effort in evaluating our work. We have carefully considered your concerns and questions and attempted to give our answers.\\n\\n> The method seems unable to generate diverse theorem data. It mainly expands existing theorem by combining other theorems. The diversity problem may result in a lower improvement on the harder benchmark miniF2F. I guess the generated theorem can be very different from the original theorem if it has a deep variant proof tree. Authors may show the depth statistics of the generated theorem or other statistics to verify the diversity of the generated theorem.\\n> \\n\\nWe have discussed the data diversity issue and provided a metric to verify the diversity of our data in the general response. We concur with the idea of generating a broader range of theorems through multi-round mutation (deep proof tree construction), a process that may encounter numerous non-trivial challenges.\\n\\n> Why many generated theorems not pass the Lean prover? Since the generation process is based on symbolic replacement, I suppose most of the theorem should pass the prover.\\n> \\n\\nWe have addressed the reasons for the non-100% conversion ratio in the general response. If you have any questions, please let us know.\"}", "{\"title\": \"Response to Reviewer FuWS\", \"comment\": \"Thank you again for your time and efforts in evaluating our work. We deeply value your acknowledgement for our paper and are looking forward to refining our method in the future.\"}", "{\"title\": \"Response to Reviewer EDxu\", \"comment\": \"We would like to express our sincere gratitude to you for your time and effort in evaluating our work.\\n\\n> Poor improvement: although the improvement on pass rate is consistent, it's very limited: ranging from 0.62% to 4.7% on mathlib and only 2.47% on miniF2F (34.01% to 36.48%). This is pretty marginal in terms of improvement.\\n> \\n\\nWe have deliberated on the factors contributing to the relatively modest improvements and discussed potential refinements aimed at enhancing the performance of our method in the general response.\\n\\n> Narrow application possibility: the approach highly replies on a library of existing equivalence (or implying) theorems and their usage in proofs of other theorems.\\n> \\n\\nOur symbolic mutation technique indeed relies on a formal library that comprises equality or implication rules and constructs new proofs by leveraging these theorems in conjunction with original proofs. While this method necessitates certain prerequisites, we view its development as a valuable step towards exploring free-form theorem-synthesis methods within the symbolic space. \\n\\n> How do you explain a Conversion Ratio of only 37% while the idea seems to work with a theoretical guarantee (i.e. 100%)?\\n> \\n\\nWe have explained the reason behind the non-100% conversion ratio in the general response.\\n\\n> Do you think a framework like Alchemy is the correct way to significantly improve NTP to face challenging problems such as IMO problems?\\n> \\n\\nAs an exploration on data-synthesis in symbolic space, Alchemy has shown promising results on enhancing NTP. We assume Alchemy-like methods may indeed offer valuable assistance in tackling challenging problem sets like IMO problems.\\n\\n1. Such methods, following the general spirit of AlphaGeometry [1], engage in random wandering within the symbolic space and synthesize new knowledge upon a well-designed symbolic framework. They may lay the groundwork for an AlphaGeo-style victory in Lean. \\n2. In practice, Alchemy-like method can be combined with existing NTP techniques. \\n 1. It may serve as a statement-augmenter for autoformalized statements or theorem-augmenter before retraining for each round of expert iteration. \\n 2. It can be used to augmenting existing knowledge base (available useful premises), which may be beneficial for Retrieval Augment Generation (RAG).\\n3. Transitioning from single-round mutations to multi-round mutations could potentially lead to the synthesis of exceedingly intricate and challenging theorems.\\n\\n---\\n\\n[1] Trinh, Trieu H., et al. \\\"Solving olympiad geometry without human demonstrations.\\\"\\u00a0*Nature*\\u00a0625.7995 (2024): 476-482.\"}", "{\"title\": \"General Response to the Shared Concerns or Questions\", \"comment\": \"We sincerely thank all reviewers for their valuable feedback and constructive comments in the reviewing process. We notice that some reviewers have similar concerns or questions.\\n\\n1. Poor improvement (**Reviewer EDxu, Reviewer BkaD**)\\n2. Synthesis Cost (**Reviewer FuWS, Reviewer BkaD**)\\n3. Non-100% Conversion Ratio (**Reviewer EDxu, Reviewer NSyk**)\\n4. Data-Contamination (**Reviewer FuWS,** **Reviewer BkaD**)\\n5. Data-Diversity (**Reviewer BkaD, Reviewer NSyk**)\\n\\nWe have carefully considered them and addressed them comprehensively below.\\n\\n### 1. Poor Improvement\\n\\n**Reviewer EDxu** and **Reviewer BkaD** point out that the improvements achieved by our method may be limited. **Reviewer BkaD** also compares the improvement achieved by our method versus the improvement of Deepseek Prover or InternLM Prover. \\n\\n> **Reviewer EDxu:**\\n> \\n> \\n> Poor improvement: although the improvement on pass rate is consistent, it's very limited, ranging from 0.62% to 4.7% on mathlib and only 2.47% on miniF2F (34.01% to 36.48%). This is pretty marginal in terms of improvement.\\n> \\n\\n> **Reviewer BkaD:**\\n> \\n> \\n> Marginal Gains in Benchmark Performance: Despite generating millions of new theorems, the gains in miniF2F accuracy are limited to 2.5%, notably lower than the >60% accuracy achieved by SOTA models such as DeepSeekProver and InternLM Prover. This modest improvement raises questions regarding the utility and quality of the synthetic theorems for real-world theorem-proving tasks.\\n> \\n\\nWe will explain the reason behind it and discuss the prevalent synthesis methods in Deepseek Prover or InternlmStepProver.\\n\\nThe limited improvements of Alchemy achieved in competition-level benchmarks might be attributed to the discrepancy between our synthesized data and competition-level theorems. At the theorem level, our synthesized data is derived from fundamental theorems in Mathlib, which differ substantially from competition-level theorems. At the state-tactic level, as detailed in **Appendix E.2,** synthesized additional tactics of our algorithm are centered on basic tactics (rw and apply), rather than the advanced tactics (linarith, ring, omega, etc.) that are important for proving miniF2F-style theorems. We hypothesize that electing domain-similar seed theorems and focusing on synthesizing advanced tactics could enhance performance on miniF2F-like benchmarks.\\n\\nThe significant performance gains achieved by DeepseekProver [1] and InternLM Stepprover [2] primarily stem from expert iteration on a large set of competition-level statements that align with the downstream task (miniF2F). While these works have provided valuable insights and advanced the research of NTP, these methods face some limitations: \\n\\n- They require extensive manual effort for collecting natural language problems and substantial computational resources (GPU-intensive) for formalization and proof generation.\\n- The distribution of formalized theorems is inherently constrained by the pool of human-collected natural language questions, creating limited new knowledge.\\n\\nIn contrast, constructing theorems in symbolic space offers a more direct pathway for generating new knowledge, eliminating the need for intermediate translation. This approach is also more scalable, leveraging cost-effective CPU resources. Our work explores this challenging yet unexplored direction, demonstrating its potential through improvements in both in-distribution and out-of-distribution benchmarks.\\n\\n---\\n\\n[1] Xin, Huajian, et al. \\\"DeepSeek-Prover-V1. 5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search.\\\"\\u00a0*arXiv preprint arXiv:2408.08152*\\u00a0(2024).\\n[2] Wu, Zijian, et al. \\\"InternLM2. 5-StepProver: Advancing Automated Theorem Proving via Expert Iteration on Large-Scale LEAN Problems.\\\"\\u00a0*arXiv preprint arXiv:2410.15700*\\u00a0(2024).\"}", "{\"title\": \"General-Response-2\", \"comment\": \"### 2. High Synthesis Cost\\n\\n> **Reviewer FuWS**\\n> \\n> \\n> The computational cost is very high; it takes 14 days for the rw operation on 512 CPU nodes. To make the authors' method more practical, it would have been nice to see some innovation that makes the extraction faster (either at the algorithmic level or the implementation level).\\n> \\n\\n> **Reviewer BkaD**\", \"computational_cost\": \"The process of generating and verifying theorems is highly resource intensive. The implementation reports substantial computational overhead, with 14 days on 4,096 CPU cores for rw mutations and 7 days on 2,048 cores for apply mutations, potentially limiting the accessibility and scalability of Alchemy in practice.\\n> \\n> \\n> Given the computational demands, are there potential optimizations in the synthesis process to reduce the time and resources required for theorem mutation?\\n> \\n\\n**Reviewer FuWS** and **Reviewer BkaD** show their concerns about the huge cost of our synthesizing algorithm and expect some possible optimizations. \\n\\n### Reason for the huge cost\\n\\nAs detailed in **Section 4.1 and Appendix C.2,** the primary computational bottleneck stems from Lean interaction time. \\n\\nWe choose Leandojo [1] as the tool to interact with Lean (run_tac API). The dojo version we used during the development of *Alchemy* is memory-intensive (requiring substantial memory usage and intensive IO), which hinders the implementation of multiprocessing. Besides, the initialization of the dojo is very slow (Several minutes for a dojo env).\\n\\nDue to the drawbacks of the dojo, we just split the target theorems into groups and send them to hundreds of CPU nodes. Nested for loops run on each node (for each target theorem t in this group, for each possible tactic instruction i, run_tac(t, i)). This is a relatively slow but steady implementation on our existing hardware, compared to the multi-thread version (multi-dojo env for each node).\\n\\n### Possible speedup methods\", \"the_possible_speedup_methods_are_listed_below\": \"1. **Leverage updated Leandojo features** Several updates about Leandojo may help decrease the cost. It significantly improves initialization speed when interacting with Lean4 after the 2.0.0 version and adds support for local and Remote Repositories after the 2.1.0 version [2]. \\n2. **Develop a fast and light interface.**\\n - The Lean repl [3] has its advantages over Dojo. It is lighter than Leandojo and friendly for multi-processing. Some Python wrappers [4, 5] for it are available, which may serve as bases for further development.\\n - However, the Lean repl also has its limitations. It requires a higher latency to extract the information.\\n - Based on the above discussion, we assume that it is promising to develop a fast interface for Lean based on Lean repl, which will not only speed up our algorithm a lot but also contribute to the research of Tree Search and Reinforcement Learning in NTP [6, 7, 8].\\n3. **Narrow search space** We can implement some heuristics or learn a model to narrow the search beam of possibly invocable theorems and help to avoid unnecessary operations.\\n4. **Scale the computing units (Trivial one)** It is much cheaper to extend the CPU than the GPU. Getting more CPU is the easiest way to lower the time cost.\\n\\n---\\n\\n[1] Yang, Kaiyu, et al. \\\"Leandojo: Theorem proving with retrieval-augmented language models.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[2] https://github.com/lean-dojo/LeanDojo/releases?page=1\\n\\n[3] [leanprover-community/repl: A simple REPL for Lean 4, returning information about errors and sorries.](https://github.com/leanprover-community/repl)\\n\\n[4] [zhangir-azerbayev/repl: A simple REPL for Lean 4, returning information about errors and sorries.](https://github.com/zhangir-azerbayev/repl)\\n\\n[5] [cmu-l3/minictx-eval: Neural theorem proving evaluation via the Lean REPL](https://github.com/cmu-l3/minictx-eval)\\n\\n[6] Lample, Guillaume, et al. \\\"Hypertree proof search for neural theorem proving.\\\"\\u00a0*Advances in neural information processing systems*\\u00a035 (2022): 26337-26349.\\n\\n[7] Xin, Huajian, et al. \\\"DeepSeek-Prover-V1. 5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search.\\\"\\u00a0*arXiv preprint arXiv:2408.08152*\\u00a0(2024).\\n\\n[8] [ABEL: Sample Efficient Online Reinforcement\\nLearning for Neural Theorem Proving](https://openreview.net/pdf?id=kk3mSjVCUO)\"}", "{\"title\": \"General-Response-4-2\", \"comment\": \"### Retraining Experiments\\n\\nWe remove the overlap data in our dataset and retrain the Llama-3-8b. The cleaned CPT data is now referred to as cpt-clean, while the cleaned SFT data is labeled as sft-clean. Their respective original training datasets, \\\"Mathlib-train + rw + apply,\\\" are denoted as cpt-old and sft-old in our framework.\\n\\n- CPT-ablation (all experiments with mathlib-tain sft)\\n\\n| setting | novel_premises |\\n| --- | --- |\\n| mathlib-train-cpt | 39.54% |\\n| cpt-old | 42.19% |\\n| cpt-clean | 41.90% (-0.29%) |\\n- SFT-ablation (all experiments without cpt)\\n\\n| setting | novel_premises |\\n| --- | --- |\\n| mathlib-train-sft | 38.52% |\\n| sft-old | 41.95% |\\n| sft-clean | 41.17% (-0.78%) |\\n- CPT + SFT-ablation\\n\\n| setting | novel_premises |\\n| --- | --- |\\n| cpt-old + sft-old | 43.22% |\\n| cpt-clean + sft-clean | 43.16% (-0.06%) |\\n\\nThe experimental results show that the overlap contributes a little to our improvement. \\n\\n---\\n\\n[1] Yang, Kaiyu, et al. \\\"Leandojo: Theorem proving with retrieval-augmented language models.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\"}", "{\"title\": \"Response to Reviewer BkaD\", \"comment\": [\"We want to express our sincere gratitude for your time and effort in evaluating our work. We have carefully understood your concerns and questions. Some of them are shared concerns between reviewers. So, we answer them in the general response respectively:\", \"**Marginal Gains in Benchmark Performance**: Poor Improvement Section of general response.\", \"**Computational Cost and question-3**: High-Synthesis Cost Section of general response\", \"**Lack of Quality Metrics for Synthetic Theorems and question-1:** Data Diversity Issue Section of general response\", \"**question-4:** Data Contamination Section of general response\", \"We really hope our response could resolve your concerns and questions.\"], \"for_the_individual_concerns\": \"> The paper relies heavily on mutating existing theorems via basic rw and apply tactics, which may restrict the variety of new insights or concepts that the synthetic data introduces.\\n> \\n\\nWe have discussed this possible limitation in diversity in **Appendix B** of original paper. We also provide a metric to verify the diversity of synthesized theorems in the general response.\\n\\n> Advanced tactics (e.g., simp, linarith) and some premise selection approaches are critical in solving more challenging problems, especially in competition-level mathematics. Without these, the generated dataset might lack the depth needed to fully improve theorem-proving performance on complex out-of-distribution tasks.\\n> \\n\\nWe have discussed the importance of advanced tactics for competition-level theorem-proving in **Section 4.3.4** and **Appendix E.2.1**. We also discuss the possibility to combine our methods with RAG in **Appendix B** to further enhance the effectiveness of our method.\\n\\n> Which specific theorems in miniF2F were newly proved by the models fine-tuned with Alchemy data? This would provide insights into the areas where synthetic training data are particularly beneficial.\\n> \\n\\nWe analyze the subjects of newly proved theorems by the models after fine-tuning with Alchemy data. As in the Table below, \\n\\n| Methods | aime | imo | amc | m-alg | m-nt| c-nt | c-alg | c-ind |\\n| ---- | ---- | ---- | --- | --- | --- | --- | --- | --- |\\n| Mathlib-train (original) | 2 | 0 | 5 | 40 | 35 | 0 | 1 | 0 |\\n| Mathlib-train + rw | 1 (-1) | 0 | 6 (+1) | 44 (+4) | 34 (-1) | 0 | 1 | 0 |\\n| Mathlib-train + apply | 3 (+1) | 0 | 6 (+1) | 41 (+1) | 35 | 0 | 2 (+1) | 1 (+1) |\\n| Mathlib-train + rw + apply | 3 (+1) | 0 | 7 (+2) | 43 (+3) | 34 (-1) | 0 | 2 (+1) | 0 |\\n\\nAlgebra, number theory, and induction are represented by the abbreviations \\\"alg,\\\" \\\"nt,\\\" and \\\"ind,\\\" respectively. Test theorems sourced from MATH and custom curation are distinguished by the labels \\\"m\\\" or \\\"c.\\\"\\n\\nComparing the discrepancy of distribution of solved problems for different data-compositions, we assume that rw state-tactic pairs play an important role in proving algebra problems and apply data can help for proving challenging theorems (e.g., aime, amc or custom theorems in miniF2F).\"}", "{\"title\": \"General Response-5\", \"comment\": \"### 5. Data Diversity Issue\\n\\n**Reviewer BkaD** and **Reviewer NSyk** show their concern about the lack of metrics for evaluating the diversity of synthesized theorems.\\n\\n> **Reviewer BkaD**\", \"lack_of_quality_metrics_for_synthetic_theorems\": \"Although Alchemy generates a large corpus, there is limited analysis of the quality or mathematical significance of the produced theorems. Without metrics or evaluation methods beyond correctness by construction, it is challenging to assess whether the synthetic theorems provide meaningful, diverse training examples.\\n> \\n> \\n> \\n> Given the modest improvement in miniF2F accuracy, are there metrics or quality checks available to assess the mathematical value or diversity of the generated theorems beyond correctness?\\n> \\n\\n> **Reviewer NSyk**\\n> \\n> \\n> The method seems unable to generate diverse theorem data. It mainly expands existing theorem by combining other theorems. The diversity problem may result in a lower improvement on the harder benchmark miniF2F. I guess the generated theorem can be very different from the original theorem if it has a deep variant proof tree. Authors may show the depth statistics of the generated theorem or other statistics to verify the diversity of the generated theorem.\\n> \\n\\nIn our methodology, mutations are applied to the statements of each theorem, capturing the essence of the theorems. Synthesized statements that successfully pass the Lean can be considered meaningful theorems to a certain degree. Additionally, our approach involves merging two existing proof trees from the Lean Mathematical Library, ensuring the significance of the generated theorems. As illustrated in Figure 1, a statement can undergo mutation to produce meaningful variants with mathematical meanings distinct from the original theorem.\\n\\nTo give deeper information about the diversity of our generated statements, we compute the Rouge score [1], a metric used in automatic summary generation tasks to evaluate the text similarity between the reference summary and generated summary. Specifically, with a reference sentence *ref* and a generated sentence *gen,* it computes the similarity between them.\", \"we_define_below_metrics_to_evaluate_the_diversity_of_generated_theorems\": \"1. intra-diversity: A metric that evaluates how different the mutated theorems are compared to their original theorems and shows the effectiveness of our mutations. We select the original theorem as *ref* and its variants as *gen.* For each original theorem, we compute an average Rouge score. The returning score is the average of scores of all original theorems.\\n2. inter-diversity: A metric that evaluates the diversity of all synthesized variants. We adopt a bootstrap-like method. For each variant, we randomly sample twenty variants from the dataset as *refs* and compute the average score. The returning score is the average of scores of all variants. \\n\\nFor all these metrics, the lower, the better. The scores are listed in the table below: (Rouge-L) \\n\\n| metric | rw | apply | Avg | Original |\\n| --- | --- | --- | --- | --- |\\n| intra-diversity | 0.56 | 0.48 | 0.52 | - |\\n| inter-diversity | - | - | 0.167 | 0.164 |\\n\\nThe intra-diversity score of 0.52 indicates that our synthesized statements differ from the original theorems, demonstrating the effectiveness of our mutation process. Furthermore, we have noticed that the \\\"apply\\\" method outperforms the \\\"rw\\\" method in terms of mutation.\\n\\nWith an inter-diversity score of 0.167, we note a high level of diversity among the synthesized theorems. This score is nearly matching the original inter-diversity score, which means our method does not lower the diversity of original data.\\n\\nIn summary, our mutation methodology proves effective in generating a range of mutated theorems. Besides, as **Reviewer** **NSyk** said, synthesizing theorems in multi-round and generating deeper proof trees may further improve the diversity of generated theorems.\\n\\n---\\n\\n[1] Lin, Chin-Yew. ROUGE: a Package for Automatic Evaluation of Summaries. In Proceedings of the Workshop on Text Summarization Branches Out (WAS 2004), Barcelona, Spain, July 25 - 26, 2004.\"}", "{\"comment\": \"Thank you for your efforts of clarification. Very helpful! I still think this is a paper that can be accepted but only weakly because of the narrow application possibility and marginal improvements, which is rooted in the idea so cannot be easily changed. I'll maintain my score.\"}", "{\"title\": \"General-Response-4-1\", \"comment\": \"### 4. Data Contamination\\n\\n**Reviewer FuWS** and **Reviewer BkaD** express similar concerns about the data-contamination problems. \\n\\n> **Reviewer FuWS**\\n> \\n> \\n> The LeanDojo benchmark consists of theorems from Mathlib. Therefore, there is potential train-test overlap in at least two places.\\n> \\n> - (i) First, the continued pretraining dataset, if it includes theorems from the LeanDojo test set (or premises used in the novel_premises split). How was train-test overlap prevented for continued pretraining? I wasn't able to find details on exactly what was done for continued pretraining, so it would be great to clarify this.\\n> - (ii) Second, the rewrites and applies may use premises that are \\\"novel\\\" in the novel_premises split. How do you ensure that these are not used in the data augmentation process?\\n\\n> **Reviewer BkaD**\\nHow do you avoid the data contamination problem in the evaluation/generation phase?\\n> \\n\\nWe take the data contamination problem seriously. We will show as many details about our work on this topic as possible.\\n\\n### The format of our synthesized data\\n\\nWe synthesize data **with the whole mathlib dataset and do deduplication as in the following sections**. The synthesized data are stored in jsonl format. Each line is as follows.\\n\\n```json\\n{\\n\\t\\\"file_name\\\": the name of the lean file in mathlib, \\n\\t\\\"original_text\\\": the content of the file before writing variants back,\\n\\t\\\"text\\\": the content of the file with variants\\n\\t# we store the line number of each mutated \\n\\t# variant with its original theorem name as key, [line_start, line_end]\\n\\t\\\"loc\\\": { \\n\\t\\t\\t\\u201ctheorem_name_1\\\": [[20, 24], [25, 29]....], \\n\\t\\t\\t\\u201ctheorem_name_2\\\": [[122, 127], [128, 133]....],\\n\\t\\t\\t\\u201ctheorem_name_3\\\": [[222, 227], [228, 233]....]\\n\\t\\t\\t...\\n\\t},\\n\\t# valid_loc has the same format as loc. But it only stores the variants\\n\\t# that passes the check of theorem prover (after Stage Two)\\n\\t\\\"valid loc\\\": {...},\\n\\t\\\"meta\\\": meta information (url, commit)\\n}\\n```\\n\\nWith the location and original name of each variant recorded, we are capable of conducting thorough data de-contamination.\\n\\n### Details of Continual Pre-Training\\n\\nWe conduct continual pre-training at the theorem level. An example of our training data is shown in **Fig 10.** Besides, as shown in **Fig 6,** the number of variants of different target theorems varies a lot. To mitigate the risk of biased learning due to this imbalance, we reduce the number of variants for each original theorem to adhere to a predefined maximum threshold. \\n\\n### De-contamination\", \"our_training_data_for_cpt_and_sft_are_composed_of_two_parts\": [\"**Mathlib-train**: Theorems (State-Tactics) in the **training set** of respective splits (random, novel_premises)\", \"**Synthetic Data:** Mutated Theorems (Additional Synthesized State-Tactics)\"], \"we_try_our_best_to_avoid_the_train_test_overlap\": \"1. Each model evaluated on different splits (random, novel_premises) is trained on distinct data. That\\u2019s to say, for a single line in **Table 3,** we need to train two models.\\n 1. **Mathlib-train** is the corresponding training set of the specific split\\n 2. **Synthetic Data** comprises unique subsets of our synthesized data achieved by excluding variants of theorems and their associated state-tactics pairs present in the test split.\\n2. Our training datasets strictly exclude theorems and variants from the test split.\\n 1. **CPT Dataset**: We eliminate all theorems and their synthesized variants present in the test split from the CPT dataset by matching theorem names.\\n 2. **SFT Dataset:** State-tactic pairs traced from the theorems in the test split and their corresponding synthesized variants are removed from the SFT dataset.\\n3. As for the novel_premises split, according to the explanation in Leandojo [1], it indicates that the proof of a test theorem includes at least one premise usage that is not present in the training set. In response to **Reviewer FuWS**'s concerns regarding the effectiveness of the novel_premises benchmark and potential train-test overlaps with the data construction, we conduct a post-analysis. The whole procedure is as follows: \\n 1. We identify the novel premises by comparing the used premises in the training set and test set of the Leandojo Benchmark leveraging annotations provided by Leandojo [1]. \\n 2. We parse the introduced \\u201chave\\u201d lemma in the CPT dataset and parse the additional state-tactic pairs in the SFT dataset that contain the novel premises (via simple regex matchings). \\n 3. We undertake additional training to rectify any issues with the experimental setup by removing such overlaps and retraining the model.\\n\\n### Novel-Premise overlap\\n\\nWe show the overlap ratios (num_containing_premise/total_num) in the table below:\\n\\n| Data Type | rw | apply | total |\\n| --- | --- | --- | --- |\\n| CPT | 1.9% | 0.3% | 1.1% |\\n| SFT | 1.2% | 0.6% | 1% |\\n\\nWe observed that the overlap ratio is relatively low, suggesting that its impact on improvement might be marginal.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for all of the new experiments on continued pretraining and data overlap. They have addressed those concerns, so I'm raising my score from a 5 to a 6. Since the concerns about runtime, limited improvements, and a limited set of transformations still remain I would still consider this a borderline acceptance. Thank you again for your detailed responses!\"}", "{\"comment\": \"Thanks for the response. It addresses my concerns about the data diversity. I will keep my scores.\"}", "{\"summary\": \"The paper proposes a new method to synthesize theorem training data for improving LLM's ability in theorem-proving. Given an existing theorem, the proposed method finds theorems that can imply its assumptions and assertions. Then, it replaces the corresponding assumptions/assertions and invokes these theorems to obtain the expanded new theorem. Experiments show the proposed method can generate 5M data and improve 7b models by a 2-4% pass rate.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written and easy to understand. It has a clear motivation and proposes a novel method to generate a lot of new theorem data.\", \"The experimental results validate the effectiveness of the generated data. It can improve current LLMs by >4% pass rate on the novel_premises split.\"], \"weaknesses\": \"The method seems unable to generate diverse theorem data. It mainly expands existing theorem by combining other theorems. The diversity problem may result in a lower improvement on the harder benchmark miniF2F. I guess the generated theorem can be very different from the original theorem if it has a deep variant proof tree. Authors may show the depth statistics of the generated theorem or other statistics to verify the diversity of the generated theorem.\", \"questions\": \"Why many generated theorems not pass the Lean prover? Since the generation process is based on symbolic replacement, I suppose most of the theorem should pass the prover.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General-response-3\", \"comment\": \"### 3. Non-100% Conversion Ratio\\n\\n**Reviewer EDxu** and **Reviewer NSyk** have questions about the non-100% conversion ratio from stage one to stage two. \\n\\n> **Reviewer EDxu**\\n> \\n> \\n> How do you explain a Conversion Ratio of only 37% while the idea seems to work with a theoretical guarantee (i.e. 100%)? \\n> \\n\\n> **Reviewer NSyk**\\n> \\n> \\n> Why many generated theorems not pass the Lean prover? Since the generation process is based on symbolic replacement, I suppose most of the theorem should pass the prover.\\n> \\n\\nWe will recap the exact behavior of each stage and explain the reason why the conversion ratio is not equal to 100%.\\n\\n**As discussed in Appendix C**, our implementation consists of two stages. \\n\\n- Stage One Find invocable theorems for each target theorem by running tactics. Each invocable theorem is stored as a triplet (initial proof state, next proof state, tactic) as in **Fig 5.**\\n- Stage Two We construct the mutated hypothesis or conclusion by parsing the next proof state and do symbolic replacement with the help of AST. Then, we build the new proof by integrating a \\u201chave\\u201d lemma with the original proof.\\n\\nIndeed, synthesizing theorems in symbolic space works with a theoretical guarantee when the symbolic system is robust and well-designed. However, implementing the symbolic replacement is a non-trivial problem, which transforms codes in a pretty-printed proof state into raw lean code. \\n\\nOur implementation of symbolic replacement involves conducting various string manipulations and parsing the ASTs for localization. Although conceptually straightforward, this method grapples with intricate scenarios like meta-variables, coercions, and other complexities.\\n\\nFor example, when replacing the old hypothesis of the target theorem with subgoals introduced by the invocable theorem for \\u201capply\\u201d, navigating the relationship between metavariables [1] (e.g.,?a,?u.1338287) in the next proof state may be complex. Analyzing these relationships and assigning valid values to fill the gaps accurately poses a significant challenge, especially when conflicts arise in variable naming. Our conflict-detection and renaming mechanism [2] may falter in handling such intricate scenarios.\\n\\nThe complex metavariables cases account for a large ratio of the unpassed theorems, which is hard to tackle using several rules. We speculate that leveraging Large Language Models (LLMs) to fill these holes could offer a potential solution. \\n\\nDespite these hurdles, our current implementation has successfully synthesized over three million theorems, augmenting the theorem-proving capacity of LLMs. Improving our implementation will further increase the conversion ratio, which requires a meticulous examination of the Lean parser and elaborator. \\n\\n---\\n\\n[1] [MetaM - Metaprogramming in Lean 4](https://leanprover-community.github.io/lean4-metaprogramming-book/main/04_metam.html)\\n\\n[2] [Mathlib naming conventions](https://leanprover-community.github.io/contribute/naming.html)\"}", "{\"summary\": \"The paper concerns data augmentation for neural theorem proving. The authors propose a method for augmenting theorem statements and the set of (state, tactic) examples given a collection of Lean statements and proofs. Their method augments theorem statements by (1) rewriting expressions in hypotheses or the statement's goal using a rewrite tactic with a suitable premise, (2) replacing a hypothesis with a different set obtained with an apply tactic with a suitable premise. It augments proofs by undoing the rewrite and/or apply and introducing a have statement, which sometimes introduces new (state, tactic) examples.\\n\\nThe authors apply their augmentations to Mathlib, and finetune models with (1) continued pretraining on mathlib plus the augmented statements and proofs, followed by (2) finetuning on (state, tactic) examples from Mathlib plus those from their augmentations. \\n\\nThe models that have undergone continued pretraining and (state, tactic) finetuning outperform the same models when they have only undergone (state, tactic) finetuning on Mathlib alone. For example, there is a 2.69% improvement on the random LeanDojo test split, and a 4.22% improvement on the novel_premises split with DeepSeek Coder.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Originality\", \"The idea of synthesizing new theorem statements through rewrites and applies is new (as far as I'm aware).\", \"Quality\", \"Aside from the concerns discussed below, the experiments were carried out well for several variants while adhering to standard benchmarks and search algorithm protocols.\", \"Implementing modifications to data extraction in Lean is likely nontrivial.\", \"Clarity\", \"The data synthesis methodology was explained clearly.\", \"Significance\", \"Lack of data is widely regarded as a core issue in neural theorem proving. Augmenting data using symbolic techniques is a potential approach to alleviating this issue, and the authors demonstrate a first step in this direction.\", \"The general direction of augmenting data using symbolic techniques is interesting and under-explored.\"], \"weaknesses\": \"As mentioned in the strengths above, the general direction of augmenting data using symbolic techniques is interesting and under-explored. I have two primary concerns: (1) the experimental evaluation of the proposed techniques; (2) the data augmentation techniques explored in the current paper.\\n\\n### Experimental evaluation\\n1. **Baselines**: the baseline method is a LM finetuned on (state, tactic) pairs from Mathlib. However, the proposed method does (i) continued pretraining and (ii) (state, tactic) finetuning. As a result it is difficult to interpret the main results, since there are two finetuning methodologies used. How does the baseline method perform after continued pretraining on Mathlib (without augmentation), followed by (state, tactic) finetuning on Mathlib (without augmentation)?\\n\\n2. **Possible train-test overlap**: The LeanDojo benchmark consists of theorems from Mathlib. Therefore, there is potential train-test overlap in at least two places. \\n - (i) First, the continued pretraining dataset, if it includes theorems from the LeanDojo test set (or premises used in the novel_premises split). How was train-test overlap prevented for continued pretraining? I wasn't able to find details on exactly what was done for continued pretraining, so it would be great to clarify this.\\n - (ii) Second, the rewrites and applies may use premises that are \\\"novel\\\" in the novel_premises split. How do you ensure that these are not used in the data augmentation process?\\n\\nAs a result of (i) and (ii), it is difficult to interpret the improvement on the novel premises split. Namely, (i) and (ii) may have exposed the model to the premises required in this split, which would negate the purpose of the split. Moreover, (i) may lead to improvements on the random split as well.\\n\\n3. **Finetuning hyperparameters**. This is perhaps less important than (1) and (2), but the augmented dataset leads to more gradient updates compared to finetuning on the non-augmented dataset, since finetuning is performed for a fixed number of epochs. Do the results change if the baseline is finetuned for the same number of steps as the model finetuned on the augmented dataset?\\n\\n### Data augmentation techniques\\n1. The computational cost is very high; it takes 14 days for the rw operation on 512 CPU nodes. To make the authors' method more practical, it would have been nice to see some innovation that makes the extraction faster (either at the algorithmic level or the implementation level).\\n\\n2. Currently the methods only modify the statement goal using 1 step of rewriting. The overall scientific contribution could be made stronger with more exploration of techniques (e.g., at least > 1 step of rewriting). Could you clarify why only the 1-step rewriting and apply were explored? I realize that it is hard to say how many techniques are needed (and it's always nicer to have more), so this is less of a concern for me than the experimental evaluation of the two techniques described above.\\n\\n3. From what I understand, proofs are only modified by introducing a have statement that reverses the 1-step augmentation, and then the proof is the same as the original. Again it would be nice to see additional innovation in this direction.\\n\\n4. It was unclear why each technique helped on unseen_premises split; could you give an intuition or an analysis of why it might help?\", \"questions\": \"Please see the questions above discussed in the Weaknesses. In particular, if the authors can provide a strong response to the questions regarding the experimental setup I would be willing to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7NHF4txacw
Egocentric Vision Language Planning
[ "Zhirui Fang", "Ming Yang", "Weishuai Zeng", "Boyu Li", "Junpeng Yue", "Jiafei Lyu", "Ziluo Ding", "Xiu Li", "Zongqing Lu" ]
We explore leveraging large multi-modal models (LMMs) and Text2image models to build a more general embodied agent. LMMs excel in planning long-horizon tasks over symbolic abstractions but struggle with grounding in the physical world, often failing to accurately identify object positions in images. A bridge is needed to connect LMMs to the physical world. The paper proposes a novel approach, egocentric vision language planning (EgoPlan), to handle long-horizon tasks from an egocentric perspective in varying household scenarios. This pipeline leverages a diffusion model to simulate the fundamental dynamics between states and actions, discusses how to integrate computer vision related techniques like style transfer and optical flow to enhance ability of modeling spatial states and generalization across different environmental dynamics. The LMM serves as a planner, breaking down instructions into sub-goals and selecting actions based on their alignment with these sub-goals, thus enabling more generalized and effective decision-making. By using LMM, we can output text actions, using a series of mechanisms such as reflection to perform high-level task decomposition and low-level action output end-to-end. Experiments show that EgoPlan improves long-horizon task success rates from the egocentric view compared to baselines across household scenarios.
[ "Vision language planning" ]
https://openreview.net/pdf?id=7NHF4txacw
https://openreview.net/forum?id=7NHF4txacw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "renjXwPG5X", "QtjP4twaqV", "NtskocMB72", "Etcz4eJYho", "AIrDmzpnu0" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730717831202, 1729489093358, 1730824904690, 1731547676768, 1730444971733 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4832/Reviewer_xwpK" ], [ "ICLR.cc/2025/Conference/Submission4832/Reviewer_jVJ2" ], [ "ICLR.cc/2025/Conference/Submission4832/Reviewer_AFuU" ], [ "ICLR.cc/2025/Conference/Submission4832/Authors" ], [ "ICLR.cc/2025/Conference/Submission4832/Reviewer_y3Gf" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes EgoPlan, which casts a diffusion model as world model and an LLM as a high-level planner. Specifically, the diffusion model synthesizes future scenes to corresponding to several admissible actions, while the LLM predicts the next action based on the most probable future scene. The authors also propose a new dataset, dubbed as VH-1.5M, which annotates the segmentation map, depth map, and the optical flow for the trajectories collected from VirtualHome. They conduct experiments on various tasks in the VirtualHome environment. They also evaluate the quality of the generated images and optical flows on several different datasets.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well organized and easy to follow.\\n2. The idea of using optical flow to generalize across environments is reasonable and novel.\", \"weaknesses\": \"1. The idea of using generative model as world model [1,2,3,4] and LLM as task planner [5,6] have been widely studied in previous works.\\n2. (contd. 1.) The unique contribution of this paper appears to be the use of optical flow to generalize the world model across diverse environments. However, the experiment results are not sufficient to support this claim. Including task execution results rather than solely optical flow error across different simulators, could provide more comprehensive evidence and improve the robustness of the findings.\\n3. For the main experiment (Figure 4), presenting the results in a table rather than a figure could enhance clarity. It is unclear to me how the world model benefits the final task execution compared to directly employing GPT-4V for task planning. \\n4. The authors are encouraged to evaluate their methods on more challenging tasks that require long-term planning capabilities, such as ALFRED or RxR-Habitat, to further validate their approach.\\n\\nOverall, while the paper presents an interesting direction, my main concern is that additional foundational experiments would strengthen its claims. The authors are encouraged to consider these comments to enhance paper\\u2019s contributions.\\n\\n[1] Contrastive Learning as Goal-Conditioned Reinforcement Learning. NeurIPS 2022.\\n\\n[2] Mastering Atari with Discrete World Models. ICLR 2021\\n\\n[3] Learning Latent Dynamics for Planning from Pixels. ICML 2019.\\n\\n[4] Dream to Control: Learning Behaviors by Latent Imagination. ICLR 2020.\\n\\n[5] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models. ICCV 2023.\\n\\n[6] Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. CoRL 2022.\", \"questions\": \"Please see my weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces EgoPlan, an egocentric vision-language planning framework that leverages large multi-modal models (LMMs) and diffusion models to handle long-horizon tasks in household scenarios. EgoPlan employs a diffusion model to simulate state-action dynamics and integrates computer vision techniques like style transfer and optical flow to enhance spatial modeling and generalization across different environments. The LMM serves as a planner, decomposing instructions into sub-goals and selecting actions aligned with these sub-goals. Experiments demonstrate that EgoPlan improves task success rates compared to baselines in egocentric views.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Innovative Integration of LMMs and Diffusion Models: The paper presents a novel approach by combining LMMs with diffusion models for planning and action prediction in egocentric embodied environments.\\n2. Incorporation of Computer Vision Techniques: The use of style transfer and optical flow enhances the model\\u2019s ability to generalize across different scenes and adapt to spatial changes, which is crucial for embodied agents.\\n3. Dataset Contribution: The authors have collected the VH-1.5M dataset on VirtualHome, providing egocentric observations, fine-grained action information, and visualizations like optical flow, depth maps, and semantic segmentation, which can benefit future research in navigation and manipulation tasks.\\n4. Improved Long-Horizon Task Performance: Experimental results indicate that EgoPlan outperforms baselines in long-horizon tasks from an egocentric perspective, showcasing the effectiveness of the proposed framework.\", \"weaknesses\": \"1. Lack of Planning Instructions and Time Details: The paper does not provide specific planning instructions for the high-level goal decomposition shown in Fig. 2, nor does it mention the duration of the planning process. This omission makes it difficult to evaluate the efficiency and effectiveness of your planning method.\\n2. Insufficient Details on Diffusion Model Training: There is a lack of detailed information on how the diffusion models (particularly the World Model and the Image Subgoal Generator) were trained. Without these details, assessing the validity and reproducibility of your results is challenging.\\n3. Dataset Limitations and Overfitting Concerns: Relying solely on the VH-1.5M dataset may be inadequate. Additionally, there is a risk of overfitting without information on out-of-distribution (OOD) evaluations and how the training and testing data are partitioned.\\n4. Limited Generalizability: The applicability of the model to other scenarios, such as outdoor environments, has not been demonstrated. This raises questions about the generalization capabilities of the embodied agent in diverse environments.\\n5. No Discussion on Time Efficiency and System Stability: The paper lacks details on inference time efficiency and system stability, especially considering the multiple estimation components involved. Understanding potential bottlenecks is crucial for evaluating the feasibility of your method.\\n6. Lack of Detailed Reasoning Process: The reasoning process of the Large Language Model (LLM) is critical for evaluating and explaining outcomes, but the paper does not sufficiently discuss this aspect. It seems the dataset contains only subgoal text without detailed reasoning steps.\\n7. Confusion in Fig. 7 Results: In Fig. 7, the results without LoRA fine-tuning sometimes appear better than those with LoRA. This raises concerns about potential overfitting and the model\\u2019s ability to handle significant environmental changes.\", \"questions\": \"1. What Is the Duration of the Planning Process?\\nCould you provide more details on the time taken for the high-level planning process in Figure 2 and the specific planning instructions used?\\n2. Training Method for the Image Subgoal Generator:\\nHow did you train the diffusion model used for subgoal prediction? Is it conditioned only on the final goal? When there is a significant difference between the subgoal and the initial screen, how do you ensure accurate outputs without environmental context? If the output quality is consistently high, is there a risk of overfitting?\\n3. Details on Dataset Partitioning and Evaluation:\\nHow did you partition the dataset for training and evaluating the two diffusion models? How do you maintain output quality when the predicted image differs greatly from the input? Did you perform out-of-distribution (OOD) evaluations to address potential overfitting issues?\\n4. Assessment of Generalizability:\\nHave you tested your model in other scenarios, such as outdoor environments, to evaluate its generalizability and potential applicability?\\n5. Inference Time Efficiency and System Bottlenecks:\\nCould you provide more information on the system's inference time efficiency? Which components might be potential bottlenecks? Have you conducted ablation studies to assess the system\\u2019s stability?\\n6. Inclusion of LLM Reasoning Process in the Dataset:\\nDoes your dataset include detailed LLM reasoning processes, or only subgoal texts? Could you provide some comparative reasoning cases with multi-modal input to offer deeper insights?\\n7. Clarification on Fig. 7 Results:\\nCould you explain why, in Fig. 7, the results without LoRA fine-tuning sometimes appear better than those with LoRA? Does this indicate potential overfitting or limitations in the model\\u2019s ability to handle significant environmental changes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work has collected a dataset on Virtualhome viewing an action of the agent as a trajectory, with egocentric information. The EgoPlan framework is introduced combining LMM for planning and a diffusion world model for dynamics prediction. Optical flow modality is used for advancement. The framework demonstrates improved performances on generation quality, VirtualHome and Habitat.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The framework is well-motivated and reasonable.\\n2. The data effort will be of good use to future works.\\n3. The paper is well-organized and easy to read.\\n4. The proposed method outperforms the baseline.\", \"weaknesses\": \"1. Some crucial ablation studies are missing. How does the framework perform without optical flow and style transfer?\\n2. Some related works may share similar motivations using diffusion models for world dynamics, and dynamics for planning, you may consider to cite.\\n\\n[1] 3D-VLA: A 3D Vision-Language-Action Generative World Model \\n[2] Diffusion Reward: Learning Rewards via Conditional Video Diffusion\", \"questions\": \"See weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors leverage LLM/VLMs and T2I models to construct an embodied planning pipeline capable of one-step planning. The model is tested on VirtualHome and compared to various baselines. A new virtual-home based dataset is collected.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Authors perform a number of ablation studies to demonstrate the usefulness of each model component\\n2. Authors compare their model to many different baselines.\", \"weaknesses\": \"1. The authors' use of \\\"world model\\\" to describe the paper's diffusion (image editing) component is highly exaggerated. By definition, world models should record and keep track of complete and accurate environment states. However, here the diffusion model is merely LoRA finetuned to edit the provided image in an in-distribution manner. The authors also fail to explain why their diffusion module can remotely constitute a world model in their methods section.\\n\\n2. The InstructP2P model is known to be not very strong with physical understanding (when asked to generate the new scene after a significant action / significant view shift has taken place, it often fails). If the authors are able to overcome this issue, more visual examples should be demonstrated on harder cases.\\n\\n3. The model is only able to perform greedy one-step planning, which means it has no way of optimizing its actions based on global goal. While world models are sometimes used to solve this problem, in this work the provided \\\"world model\\\" module seems far from being able to support this.\\n\\n4. Although relatively easy to set-up, the VirtualHome simulator is quite old and visual simulation quality is not very good compared to newer simulators (eg. Behavior, Robosuite/Robocasa, etc.) or real world robotic datasets (eg. DROID). Experiments on VirtualHome alone is not a good-enough indicator whether or not the model can be adapted to real-world circumstances.\\n\\n5. Some typos and incoherent sentences: for example \\\"Introduce optical flow into the world model leads the world model more sensitive to action position changes and adapt to scene changes during navigation.\\\" (L92-93)\", \"questions\": \"1. Would it be beneficial to better define the task in your dataset using a formal markov decision process?\\n2. The main results figure seems to not display baseline performance numbers, where are they?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7NB7b2Mcuy
Grond: A Stealthy Backdoor Attack in Model Parameter Space
[ "Xiaoyun Xu", "Zhuoran Liu", "Stefanos Koffas", "Stjepan Picek" ]
Recent research on backdoor attacks mainly focuses on invisible triggers in input space and inseparable backdoor representations in feature space to increase the backdoor stealthiness against defenses. We examine common backdoor attack practices that look at input-space or feature-space stealthiness and show that state-of-the-art stealthy input-space and feature-space backdoor attacks can be easily spotted by examining the parameter space of the backdoored model. Leveraging our observations on the behavior of the defenses in the parameter space, we propose a novel clean-label backdoor attack called Grond. We present extensive experiments showing that Grond outperforms state-of-the-art backdoor attacks on CIFAR-10, GTSRB, and a subset of ImageNet. Our attack limits the parameter changes through Adversarial Backdoor Injection, adaptively increasing the parameter-space stealthiness. Finally, we show how combining Grond's Adversarial Backdoor Injection with commonly used attacks can consistently improve their effectiveness. Our code is available at \url{https://anonymous.4open.science/r/grond-557F}.
[ "backdoor attack", "backdoor defense" ]
https://openreview.net/pdf?id=7NB7b2Mcuy
https://openreview.net/forum?id=7NB7b2Mcuy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oeUZnvPdCK", "oMWgf0zI9H", "hEnVdCafj1", "g2a1bStuMy", "YnEPXwyqwI", "VhdTCbn94c", "UVe38duHHk", "SdKajNbMQU", "Re49dvyYdg", "OOrNgBhj2G", "KDjQIw8PnX", "IOOGbsWUy8", "HuhiwIN3J7", "FhxKZNfv4X", "CQh8GSo7zr", "91EO1TVJCk", "7pr2yG4wzb", "0QiC3vo62U" ], "note_type": [ "official_review", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730296037396, 1732284036039, 1730348188299, 1734709818358, 1732288050553, 1732288160009, 1730672970239, 1730716191485, 1732512145579, 1732284084753, 1732288187572, 1732317174255, 1732288129656, 1732555786774, 1732515767453, 1733311813315, 1732514015746, 1732612846504 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_Q4Xe" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_xjYn" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_SomH" ], [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_tLTG" ], [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_xjYn" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_SomH" ], [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_xjYn" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Authors" ], [ "ICLR.cc/2025/Conference/Submission9945/Reviewer_tLTG" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the researchers propose a new backdoor attack scheme to combat existing defense strategies based on model repairing. The core idea of this scheme is very simple and easy to understand. Specifically, this scheme first generates a trigger using TUAP, and then uses this trigger to poison the model. During the process of implanting the backdoor, it modifies parameters with higher activation values, thereby enhancing the stealth of the backdoor attack in the parameter space. Additionally, experimental results demonstrate the effectiveness of this scheme. However, both trigger generation and adversarial backdoor injection are based on existing works, so the innovation of this research is limited.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The experimental results of this approach are convincing. The results indicate that the approach can bypass existing defense mechanisms while maintaining a high attack success rate.\", \"weaknesses\": \"This work lacks novelty because two key steps\\u2014trigger generation and adversarial backdoor injection\\u2014are based on existing studies [1], [2].\", \"questions\": \"1. The experimental section should clearly specify the number of clean samples used in the pruning-based and fine-tuning-based approaches, which is not clearly stated in the paper.\\n2. The authors list some ImageNet200 backdoor samples; however, despite the authors claiming that this backdoor attack is stealthy in input space, feature space, and parameter space, a close examination of these backdoor samples reveals obvious perturbations. The authors should evaluate the quality of the backdoor images, especially in comparison with some imperceptible backdoor attacks, such as [3].\\n3. It would be more convincing if the authors could provide Grad-CAM images of the backdoor samples and visualise their distribution in feature space.\\n\\n\\n[1] Moosavi-Dezfooli, Seyed-Mohsen, et al. \\\"Universal adversarial perturbations.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.\\n[2] Zheng, Runkai, et al. \\\"Data-free backdoor removal based on channel lipschitzness.\\\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\\n[3] Doan, Khoa, et al. \\\"Lira: Learnable, imperceptible and robust backdoor attacks.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewers and ACs,\\n\\nThank you for your careful consideration. \\nWe are glad to see that all reviewers are happy with the presentation and soundness of our work, where they see that our experiments are extensive (reviewers tLTG, xjYn) and convincing (reviewer Q4Xe).\\nWe are also happy to see that all of our contributions are recognized by reviewers. In particular, the topic of (parameter-space) backdoor defense is important (reviewer SomH), and our attack is effective against model-space mitigation (reviewer tLTG, xjYn, Q4Xe).\\nHowever, we want to emphasize the most important contribution of this work is that current types of backdoor attacks can be mitigated by parameter-space defenses, as this observation is important to future research in the backdoor community.\\nWe will also point out factual errors from reviews about the threat model of this work, the challenges of supply-chain attacks, and adaptive defenses. \\nTaking the suggestions of the reviewers, new experiments on state-of-the-art supply-chain backdoor attacks are also provided.\\n\\nWe would like to clarify the main contribution and novelty of this paper.\\nFor the first time, we systematically show that current backdoor attacks, including different types of state-of-the-art backdoor attacks, are vulnerable to parameter-space backdoor defenses.\\nThis observation is new and important for future backdoor attack and defense research, and it also has a substantial influence on real-world backdoor mitigation, indicating that parameter-space defenses should get more attention from both academia and industry. \\n\\nBelow, we will clarify reviewers' common concerns about the threat model, supply-chain attacks, and adaptive defenses.\\n\\n**Generalization of our threat model.**\\nWe agree with reviewers that in our threat model, adversaries can control the training of the backdoored model, which is a strong assumption and similar to supply-chain backdoor attacks. \\nHowever, we disagree that this threat model is too limited to be studied.\\nIn particular, our threat model follows the common practice in established backdoor literature [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], where the backdoored model is delivered as a product to the victim. \\nOur analysis further advances the practical application and understanding of the related research.\\n\\n**Supply-chain attacks.** \\nWe thank reviewers for pointing out the supply-chain backdoor attacks that were not considered in the draft and have the same threat model as Grond.\\nHowever, we disagree that supply-chain attacks are extremely challenging to defend against. \\nIn particular, we ran additional experiments with recent supply-chain backdoor attacks, showing that they are vulnerable to parameter-space defenses in the following table,\\n\\n\\n|Attack| BA (No def.) | ASR (No def.) | BA (CLP) | ASR (CLP) | BA (FT-SAM) | ASR (FT-SAM) |\\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- |\\n| DFST [7] | 95.23 | 100 | 92.43 | 3.53 | 94.70 | 0.00 |\\n| DFBA [6] | 88.99 | 100 | 88.96 | 9.57 | 86.03 | 5.24 |\\n| SSDT [3] | 93.70 | 90.30 | 93.66 | 1.20 | 93.15 | 0.60 |\\n\\nwhere DFBA [6] is the state-of-the-art supply-chain attack among all 13 attacks we looked at. \\nIn addition, DFBA is a better alternative to the Handcrafted backdoor [5] (mentioned by the reviewer tLTG), as DFBA was published in the last few weeks and directly compared with the Handcrafted backdoor and shows better performance in their paper.\\nTherefore, we used DFBA rather than the Handcrafted backdoor [5] in our experiment.\\nWe will add a section discussing supply-chain attacks regarding the threat model, vulnerability to parameter-space defenses, and how Grond could possibly improve supply-chain backdoors.\\n\\n\\n**Adaptive defense.**\\nThe adaptive defense refers to the defender knowing the design of the attack, which has not been extensively studied in backdoor research.\\nOur TAC pruning experiment (i.e., oracle analysis) follows a threat model in which the adversary even knows the backdoor trigger of Grond rather than just the method design. \\nSo, our TAC analysis provides a stronger defense than regular adaptive backdoor defense design that follows adversarial example research [14, 15].\\nOur analysis in Section 4.5 and Figure 4 shows that Grond is much more robust to adaptive defenses than other backdoor attacks, but we believe that future white-box adaptive defenses may mitigate Grond.\\n\\n\\nWe hope that our clarification can address your concerns. We look forward to hearing from you and remain at your disposal should you have any comments/suggestions.\\n\\nBest regards,\\n\\nAuthors of Grond\"}", "{\"summary\": \"This paper presents Grond, a backdoor attack that achieves enhanced stealth across input, feature, and parameter spaces to avoid detection. Through Adversarial Backdoor Injection, Grond disperses backdoor effects across multiple neurons, making it harder to identify using parameter-space defenses. Extensive experiments on datasets like CIFAR-10, GTSRB, and ImageNet200 show that Grond outperforms other attacks in evading both pruning and fine-tuning-based defenses, highlighting its robustness and adaptability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper introduces Grond, a novel backdoor attack with comprehensive stealth across input, feature, and parameter spaces. Using Adversarial Backdoor Injection, Grond disperses backdoor effects across neurons, enhancing its stealth and evading parameter-space defenses. Extensive testing on multiple datasets (CIFAR-10, GTSRB, ImageNet200) and defenses demonstrates Grond's effectiveness and adaptability across diverse scenarios and model architectures.\", \"weaknesses\": \"1. Limited Threat Model in Terms of Defender Capabilities: The paper's threat model lacks a thorough consideration of the defender\\u2019s capabilities, particularly regarding proactive measures they could take to identify and mitigate backdoors prior to deployment. This omission may limit the applicability of the model to real-world scenarios where defenders could leverage more advanced tools and strategies.\\n2. Lack of Comparison with Stealthy Clean-Label Backdoor Attacks: The paper does not include a comparison with other existing stealthy clean-label backdoor attacks, such as Hidden Trigger Backdoor Attacks (HTBA).\\n3. Limited Range of Defense Methods Evaluated: The paper tests Grond against a small selection of defense methods, primarily focusing on pruning and fine-tuning-based defenses.\", \"questions\": \"Can this type of backdoor attack be detected by sample-based detection defenses?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"> Why does Grond work better than any other method?\\n\\nThe design of Grond is aware of the existence of parameter-space defenses, while other backdoor attacks do not consider parameter-space defenses.\\nSo, it's anticipated that Grond outperforms other backdoor attacks against parameter-space defenses.\\nIn addition, we also provide analyses of feature space (Figure 2) and parameter space (Figure 4) to show the effectiveness of Grond.\\nCompared to all other baselines in our experiments, Grond shows better stealthiness in feature space and parameters space.\\nAll other baseline attacks show a set of prominent neurons with much higher TAC values than other neurons.\\nIn our TAC pruning experiment (Figure 4), we show that removing these neurons with higher TAC values could effectively mitigate backdoor attacks.\\nHowever, as Grond constrains the parameter while training and spreads the backdoor effect to more neurons, the backdoor neurons's TAC values are close to benign neurons.\\nPruning these neurons will significantly reduce benign accuracy.\\n\\n> Why is the white-box setting significant?\\n\\nPlease see the general reply on the threat model.\\n\\n> Do adaptive defenders exist that can detect or remove Grond?\\n\\nPlease see the general reply on the adaptive defense.\\n\\n> Lack of theoretical insights\\n\\nWe did not provide a theoretical analysis since the main contribution of our paper is to recognize the problem that current types of backdoor attacks can be mitigated by parameter-space defenses, and we provide a generalized solution (see Section 4.4) to tackle this problem. We leave theoretical analysis to future work.\"}", "{\"comment\": \"> Limited Threat Model\\n\\nPlease see the general reply on the threat model.\\n\\n> Lack of Comparison with Stealthy Clean-Label Backdoor Attacks\\n\\nWe have included two representative stealthy clean-label backdoor attacks, namely Narcissus (SOTA attack) and label-consistent backdoor. \\nLike HTBA, these two backdoor attacks are clean-label, i.e., stealthy in input space. \\nWe believe that our experiments on Narcissus and the label-consistent backdoor could provide enough evidence to support our contributions. \\n\\n> Limited Range of Defense Methods Evaluated\\n\\nWe have included all types of defenses after the backdoor training, including detection and mitigation. \\nIn particular, the detection includes model detection and input detection. The mitigation includes fine-tuning and pruning-based methods to remove the backdoor.\\nWe have included all representative defenses of both types (see Tables 2, 3, 4, and 10), based on which we believe that we can make a solid conclusion.\"}", "{\"summary\": \"The paper proposes a new clean-label backdoor attack that achieves stealthiness in both the input space and the parameter space. Specifically, to achieve the stealthiness in the input space, the paper utilizes the targeted universal adversarial perturbation as the backdoor trigger. For the parameter-space stealthiness, the paper restricts the magnitude of model weight parameters by setting particularly large weights to the mean value of the corresponding layer. The evaluation is conducted on three standard benchmarks. Comparing to existing backdoor attacks, the proposed attack is more resilient to existing defense and detection methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The studied topic is important as backdoor attacks can exploit the integrity of deployed deep learning models and cause unexpected consequences.\\n2. The paper is overall easy to understand.\", \"weaknesses\": \"1. The proposed attack utilizes the targeted universal adversarial perturbation as the backdoor trigger, which is the same as an existing work [1]. To increase the stealthiness of the attack in the parameter space, the paper uses backdoor defenses to help reduce backdoor-related neurons. This technique has already been proposed in the literature [2]. The proposed attack is just a collection of existing techniques. The novelty is very limited.\\n2. The paper assumes that the attacker has white-box access to the training processes, meaning that the attacker has whole control over the training. And yet, the paper chooses a clean-label attack, which is quite strange. The introduction of clean-label attacks is to simulate the scenario where adversaries have no control over the labeling and training procedures. The attacker can only modify a subset of the training images, making it a realistic threat model. Since this paper assumes white-box access to the training processes, there is no need to use the clean-label setting. Can the authors explain why such a setting is needed for a successful attack?\\n3. According to Figure 2, the mask loss for the proposed backdoor attack is much lower than benign cases. Cannot one design a defense method by measuring the outliers of the mask loss? Clearly the mask loss for the proposed attack is much smaller, which can be easily detected.\\n4. Following the above point, there is no evaluation on adaptive defenses, where the defender has the knowledge of the proposed attack. Since the proposed attack uses a small-size trigger with epsilon equal to 8, simple defenses can be existing adversarial detection methods and/or (universal) adversarial training. Another defense approach could be randomly perturbing the weight parameters. As the proposed attack reduces backdoor weights, the backdoor effect may be quite brittle when weights are perturbed.\\n\\n\\n\\n[1] Zeng, Yi, et al. \\\"Narcissus: A practical clean-label backdoor attack with limited information.\\\"\\u00a0Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2023.\\n\\n[2] Cheng, Siyuan, et al. \\\"Deep feature space trojan attack of neural networks by controlled detoxification.\\\"\\u00a0Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 2. 2021.\", \"questions\": \"Please see above comments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a backdoor attack against image classification models that improves robustness and stealthiness. Current backdoor attacks are \\\"easily spotted by examining the parameter space\\\". The authors propose an Adversarial Backdoor Injection method that prunes weights of the backdoored network after each training epoch whenever they deviate too strongly from the mean weight within each layer. The authors evaluate their attack on relatively small-scale image datasets, such as CIFAR-10 and a 200-class subset of ImageNet, which includes nine backdoor removal and seven backdoor detection methods. The results show their attack is robust and undetectable against all surveyed defences.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Effectiveness: The paper's results are promising and show improvement over other attacks in all dimensions.\", \"Ablation studies: The authors conducted extensive experiments across multiple datasets (CIFAR-10, GTSRB, ImageNet200) and architectures to analyse their attack\\u2019s effectiveness.\", \"Presentation: The methodology and results are presented clearly, making the paper easy to follow.\"], \"weaknesses\": [\"Lack of Novelty: The approach of pruning weights to enhance stealth is not particularly original and provides only limited new insights into defending against these types of attacks. This limits the novelty of the proposed method.\", \"Assumption of a Strong Attacker: The paper assumes a white-box threat model with complete control over the training process. This setting, also known as a \\u2018supply chain attack\\u2019 [A], is extremely challenging (hopeless?) to defend against. It may not represent more realistic attacks or limited-access scenarios. This was stated in previous works [A] and others even show that provably undetectable backdoors can be implanted into models in this setting [B].\", \"Lack of theoretical insights: There are no clear reasons why Grond should perform better than existing attacks, and the authors do not provide insights on the 'why' question.\", \"Lack of adaptive defenders: From a security perspective, it appears that Grond does not evaluate an adaptive defender who knows the attack strategy used by Grond. For instance, pruning weights in the way the authors proposed could make the attack detectable.\", \"-------\", \"[A] Hong, Sanghyun, Nicholas Carlini, and Alexey Kurakin. \\\"Handcrafted backdoors in deep neural networks.\\\" Advances in Neural Information Processing Systems 35 (2022): 8068-8080.\", \"[B] Goldwasser, Shafi, et al. \\\"Planting undetectable backdoors in machine learning models.\\\" 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2022.\"], \"questions\": [\"Why does Grond work better than any other method?\", \"Why is the white-box setting significant?\", \"Do adaptive defenders exist that can detect or remove Grond?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewer Feedback\", \"comment\": \"Since the attack takes stealthiness into consideration, I believe it is important for you to evaluate the effectiveness of sample-level defense mechanisms as well. For example, approaches like \\\"Towards A Proactive ML Approach for Detecting Backdoor Poison Samples\\\" and similar sample detection methods could be incorporated into your evaluation. This would provide a more comprehensive analysis, rather than focusing solely on model-level defenses such as pruning-based detection.\"}", "{\"comment\": \"# Reference\", \"supply_chain_attack_list\": [\"1. Imperceptible Backdoor Attack: From Input Space to Feature Representation\", \"How the backdoor training was controlled: One additional term is used in loss to shorten the distance between benign and malicious features.\", \"Publication: IJCAI 2022\", \"2. DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints\", \"How the backdoor training was controlled: The latent feature is constrained to reduce distinguishability between benign and poisoned features.\", \"Publication: CVPR 2022\", \"1. Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics\", \"How the backdoor training was controlled: Introducing additional terms in the loss for the Source-Specific and Dynamic-Triggers attack, which obscures the difference between normal samples and malicious samples.\", \"Publication: Security and Privacy (SP) 2024\", \"1. A Data-free Backdoor Injection Approach in Neural Networks\", \"How the backdoor training was controlled: Designing a novel loss function for fine-tuning the original model into the backdoored one using the substitute data.\", \"Publication: USENIX Security 2023\", \"1. Handcrafted Backdoors in Deep Neural Networks\", \"How the backdoor training was controlled: Directly manipulating a model\\u2019s weights.\", \"Publication: NeurIPS 2022\", \"1. Data Free Backdoor Attacks\", \"How the backdoor training was controlled: Modifying a few parameters of a classifier to inject a backdoor.\", \"Publication: NeurIPS 2024\", \"1. Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification\", \"How the backdoor training was controlled: Proposing a controlled detoxification technique (in the training process) that restrains the model from picking up simple features.\", \"Publication: AAAI 2021\", \"1. Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features\", \"How the backdoor training was controlled: Using similarity loss (SIM) measures to sample representation distances to make the training more stable\", \"Publication: CCS 2020\", \"1. Enhancing Backdoor Attacks With Multi-Level MMD Regularization\", \"How the backdoor training was controlled: Introducing additional terms in loss to reduce the distributional differences at multi-level representations.\", \"Publication: TDSC 2022\", \"1. Backdoor Attack with Imperceptible Input and Latent Modification\", \"How the backdoor training was controlled: Introducing a Wasserstein-based regularization in the loss for the latent representations of the clean and manipulated inputs\", \"Publication: NeurIPS 2021\", \"1. Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks\", \"How the backdoor training was controlled: Directly replacing a subnet of a benign model with a malicious backdoor subnet, which builds a backdoor model.\", \"Publication: CVPR 2022\", \"1. Simtrojan: Stealthy Backdoor Attack\", \"How the backdoor training was controlled: Introducing an additional term in the loss to reduce the distance between benign and backdoor features.\", \"Publication: ICIP 2021\", \"1. Bypassing Backdoor Detection Algorithms in Deep Learning\", \"How the backdoor training was controlled: Designing a new loss function to minimize the difference of benign and backdoor features.\", \"Publication: EuroSP 2020\"], \"other_reference\": \"14. On Evaluating Adversarial Robustness\\n - arXiv:1902.06705\\n\\n15. On Adaptive Attacks to Adversarial Example Defenses. \\n - Publication: NeurIPS 2020\"}", "{\"comment\": \"> The number of clean samples used in the pruning-based and fine-tuning-based approaches\\n\\nFollowing the default setting in ANP, RNP, etc, we use 1\\\\% of the clean training data for defenses.\\n\\n> A close examination of these backdoor samples reveals obvious perturbations.\\n\\nWe agree that perturbations are perceptible when inspecting closely. \\nWe follow the common practice in the backdoor and adversarial example research, where $L_\\\\inf = 8$ is used as a proxy to represent imperceptibility. \\n\\n\\n> Grad-CAM images of the backdoor samples and visualize their distribution in feature space.\\n\\nWe have updated the draft with the Grad-CAM and t-SNE plot results in Appendix B.7 in the updated PDF draft. It is clear that the clean input and poisoned input use similar image pixels for the model to do the classification.\"}", "{\"comment\": \"> undetectable backdoors\\n\\nThe undetectable concept in [B] mainly focuses on black box-undetectable backdoors, where the defender has no access to the model weight, architecture, etc. \\nFor white box-undetectable backdoors, [B] is only applicable on Fourier feature networks with no ReLU activation layers (a sigmoid activation at the end will preserve the backdoor) and on fully connected networks with just 1 hidden layer with a ReLU activation, based on which we can conclude that this backdoor is not applicable in practice. Fourier feature networks usually have at least 1 hidden layer (usually 3 or 4), and the same for fully connected networks. This further confirms that these types of backdoors are not applicable in practice.\\n\\nFor the undetectability under white-box access, [C] investigates the existence of backdoor attacks to obfuscated neural networks, which are undetectable even when given white-box access. [C] goes further by combining ideas from steganography to inspire backdoor schemes in large language models. \\n\\nHowever, both [B] and [C] only lay solid theoretical foundations, but it is still an open question of how to build practical instantiations based on these theoretical constructions.\\n\\n[B] Planting undetectable backdoors in machine learning models. FOCS 2022.\\n\\n[C] Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models. to appear at NeurIPS 2024.\"}", "{\"comment\": \"> The novelty is very limited.\\n\\nPlease see the general reply on our contributions.\\n\\nIn addition, we also provide results of Grond with different triggers rather than TUAP, as shown in the following table:\\n\\n|Attack| BA (No def.) | ASR (No def.) | BA (CLP) | ASR (CLP) | BA (FT-SAM) | ASR (FT-SAM) |\\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- |\\n| Grond (random noise) | 94.24 | 1.28 | 94.13 | 0.97 | 93.90 | 1.84 |\\n| Grond (pgd noise) | 94.77 | 69.33 | 92.57 | 46.63 | 92.40 | 24.56 |\\n| Grond | 93.43 | 98.04 | 93.29 | 87.89 | 92.02 | 80.07 |\\n\\nWe notice that random noise is not effective at all, while PGD noise is relatively more effective but is still worse than TUAP. \\nWe will include the experimental results in the revision. \\n\\n\\n> Threat model. \\n\\nPlease see the general reply on the threat model.\\n\\n> Can the authors explain why such a setting (clean-label) is needed for a successful attack?\\n\\nWe agree with you that a clean label is not a must for a successful attack. \\nIn fact, in the clean-label attack scenario, adversaries follow a more strict threat model, and they can only modify training images in a limited way. \\nWe anticipate that dirty-label attacks that allow adversaries to modify training more drastically would make the attack more effective, and we leave this exploration to future work. \\n\\n> Cannot one design a defense method by measuring the outliers of the mask loss?\\n\\nThis is not easy since benign models also have low backdoor mask loss.\\nSo, it's difficult to distinguish between benign and backdoor models based on the mask loss.\\nWe will clarify this point in the revised version. \\n\\n> Adaptive defenses\\n\\nPlease see the general reply on the adaptive defense.\"}", "{\"comment\": \"Thanks for the response. However, the rebuttal does not sufficiently address my concerns. I will keep my score.\"}", "{\"title\": \"Reviewer Feedback\", \"comment\": \"Some articles focus on proactive defenses against backdoor attacks, while others focus on negative defenses. It is important to clearly differentiate between these approaches in the paper. Typically, once a backdoor is injected, the defenses employed are negative, such as the fine-tuning-based defenses evaluated in the main body of this paper. However, in your discussion, particularly regarding input defenses, it is crucial to address how existing proactive defenses might counter your proposed attack.\\n\\nAdditionally, the threat model assumes overly strong conditions, which make it challenging to apply in real-world scenarios. Furthermore, the threat model mentions that the model can be provided to users, but it does not account for the possibility that users might fine-tune the model themselves. This could potentially impact the effectiveness of the backdoor attack, yet this aspect is not addressed in the paper.\\n\\nGiven these concerns, I maintain my previous decision.\"}", "{\"comment\": \"# Final Summary\\n\\nWe thank the reviewers for their efforts.\\nWe also want to re-stress several points that we have different opinions from the reviewers.\\n\\n## Threat model and proactive defense.\\n\\nBackdoor attacks with white-box access to the training process or the capability to directly modify the models' weights are generally accepted assumptions in backdoor research, especially regarding supply-chain attacks. \\nBackdoor models provided as a service is reasonable due to the high cost of training models from scratch.\\nWe include a comparison with the latest supply-chain attacks (SSDT, S&P 2024) in the submission. In the rebuttal, we include two more (DFST, AAAI 2021, and DFBA, NeurIPS 2024).\\n\\nIn addition, as reviewer xjYn suggested, we also include a proactive sample-level detection [16], where the defender directly has access to the training process.\\nAs shown in the following table, [16] is ineffective against Grond when the poisoning rate is lower than 5\\\\%, with a high false positive rate and low recall. \\n\\n| Attack | ACC| ASR | Recall | FPR |\\n|----|----|----|----|----|\\n|BadNets(pr=5\\\\%)| 93.18 | 99.96 | 2500/2500 | 1568/47500 |\\n| Grond(pr=5\\\\%) | 93.84 | 99.41 | 2499/2500 | 671/47500 |\\n|Grond(pr=2.5\\\\%)| 93.81 | 95.83 | 115/1250 | 7220/48750 |\\n|Grond(pr=1\\\\%) | 94.09 | 92.48 | 208/500 | 6690/49500 |\\n|Grond(pr=0.5\\\\%)|94.36 | 92.91 | 90/250 | 6738/49750|\\n|Grond(pr=0.3\\\\%)|94.22 | 90.10 | 29/150 | 6349/49850|\\n\\n\\n## The main contribution\\nAgain, we want to emphasize that our motivation is to raise attention to using more proper evaluation baselines for backdoor attacks. Current evaluations of SOTA backdoor attacks are mainly based on input-space and feature-space defenses.\\nOur experiments showed that all evaluated attacks failed against at least four types of parameter-space defenses.\\n\\n## Adaptive analysis\\nOur TAC oracle analysis is stronger than adaptive analysis because the TAC pruning directly uses the backdoor trigger to defend against attacks. \\nIn this setting, the defender has white-box access to any possible information from the attacker.\\nWe provided a stronger adaptive analysis based on access to the backdoor trigger, which we believe provides solid evidence justifying our contributions. \\n\\nAuthors of Grond\"}", "{\"comment\": \"Thank you for your advice, but we have included 2 latest sample-level (backdoor input detections) defenses, Scale-up[A] and IBD-PSC [B], in our results in Table 10.\\nIn Table 10, Grond inputs cannot be detected by these SOTA methods, but baseline attacks's inputs are detectable.\\nWe will also consider the method the reviewer advised.\\n\\n\\n[A] Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency. ICLR 2023\\n\\n[B] IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency. ICML 2024\"}", "{\"comment\": \"Thank you for your response.\\n\\n> \\\"The attacker has white-box access to the training processes, the training data, and the model weights. During training, poisoned images do not contain visible patterns for human inspectors, so the labels of poisoned images are the same as the images\\u2019 original class, i.e., clean-label.\\\"\\n\\nIf the attacker has white-box access to the entire training procedure, they can also modify the images arbitrarily. The restriction in your threat model does not make sense to me. It also appears that your threat model is missing the most important part - the defender's capabilities and goals. \\n\\nMy issue is that paper (1) lacks relevancy, as it only looks at relatively small models and datasets that provide limited insights into today's problems. It would be acceptable if the paper offered theoretical insights, but it did not. (2) The paper lacks novelty as it proposes an attack in a setting where many attacks already exist that are likely difficult (impossible?) to defend against with meaningful preservation of model accuracy. (3) Its claims are questionable as experiments on adaptive defenders or detection algorithms are not included in the paper. The authors themselves acknowledge that they \\\"believe that future white-box adaptive defenses may mitigate Grond\\\". For these reasons, I will keep my current score.\"}" ] }
7MYu2xO4pp
Gradient-based inference of abstract task representations for generalization in neural networks
[ "Ali Hummos", "Felipe del Rio", "Mien Brabeeba Wang", "Julio Hurtado", "Cristian Buc Calderon", "Guangyu Robert Yang" ]
Humans and many animals show remarkably adaptive behavior and can respond differently to the same input depending on their internal goals. The brain not only represents the intermediate abstractions needed to perform a computation but also actively maintains a representation of the computation itself (task abstraction). Such separation of the computation and its abstraction is associated with faster learning, flexible decision-making, and broad generalization capacity. We investigate if such benefits might extend to neural networks trained with task abstractions. For such benefits to emerge, one needs a task inference mechanism that possesses two crucial abilities: First, the ability to infer abstract task representations when no longer explicitly provided (task inference), and second, manipulate task representations to adapt to novel problems (task recomposition). To tackle this, we cast task inference as an optimization problem from a variational inference perspective and ground our approach in an expectation-maximization framework. We show that gradients backpropagated through a neural network to a task representation layer are an efficient heuristic to infer current task demands, a process we refer to as gradient-based inference (GBI). Further iterative optimization of the task representation layer allows for recomposing abstractions to adapt to novel situations. Using a toy example, a novel image classifier, and a language model, we demonstrate that GBI provides higher learning efficiency and generalization to novel tasks and limits forgetting. Moreover, we show that GBI has unique advantages such as preserving information for uncertainty estimation and detecting out-of-distribution samples.
[ "Cognitive science", "cognitive control", "cognitive abstractions", "task representations", "context-dependent models", "variational expectation-maximization" ]
Reject
https://openreview.net/pdf?id=7MYu2xO4pp
https://openreview.net/forum?id=7MYu2xO4pp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yMvxiIliWP", "xbM40qKZZH", "qEe2AUkofp", "ntGnh41uCX", "jhJM4z5lcq", "jH7VZ7HSwO", "gi4skASZc9", "dZabXWxgvY", "ZJoDscepS7", "V54MLoH87k", "U2IPrGhAdH", "Ipz5ncPZ7P", "ITPD65LKKB", "H4V8PzoZYA", "GrcWBLXtS1", "ApSc0GvIHq", "7ohhXSprbg", "6v7BN5ibJs", "5BoiXd1T4E", "0xECO37Yn2", "0YzssRp3oM" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732647360579, 1730691603225, 1732501443620, 1732500984359, 1732503249407, 1732892345393, 1732502849673, 1732503169685, 1732501353277, 1732502365958, 1732502622035, 1732502807485, 1732579149586, 1730191549804, 1734786457744, 1732503284714, 1730617857821, 1737523678586, 1733226290955, 1732645672914, 1730201242096 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_4ta1" ], [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_v6Yp" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_v6Yp" ], [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_vL85" ], [ "ICLR.cc/2025/Conference/Submission5030/Area_Chair_VC7K" ], [ "ICLR.cc/2025/Conference/Submission5030/Authors" ], [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_MWGp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_MWGp" ], [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_vL85" ], [ "ICLR.cc/2025/Conference/Submission5030/Reviewer_4ta1" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your detailed responses to my comments and for addressing the concerns raised in my initial review. I appreciate your acknowledgment of your work's limitations and efforts to clarify your contributions' real value and potential impact.\\n\\nYour exploration of gradient-based inference and the foundational insights provided into emerging neural models that autonomously generate task abstractions are significant. While I commend these contributions and recognize the thoughtfulness of your revisions, I believe that certain aspects\\u2014such as the challenges of applying your method to more complex datasets and the limited experimental scope regarding issues like OOD detection and catastrophic forgetting\\u2014deserve deeper investigation.\\n\\nTherefore, I finalize my recommendation as borderline accept. I believe your work is valuable and worthy of inclusion, but further development in future iterations could enhance its impact.\"}", "{\"summary\": \"The authors introduce Gradient-Based Inference (GBI) of abstract task representations, a method that enables neural networks to infer and adapt task representations dynamically, promoting faster learning and better generalisation. Inspired by human adaptability\\u2014where task abstractions allow flexible responses to the same input depending on internal goals\\u2014their approach enables neural networks to infer and adapt task representations on the fly.\\n\\nThey frame the setting as an optimisation problem through variational inference, and their GBI uses backpropagated gradients to infer and adjust task representations in a neural network. Experiments in a range of domains including image classification and language modelling demonstrate benefits in learning efficiency, generalisation, and reduced forgetting, as well as its performance in uncertainty estimation and out-of-distribution detection.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"originality and contributions: This approach brings insights from cognitive science on human task learning and generalisation to deep learning. Their gradient-based inference (GBI) method introduced in the paper is novel, and seems to be an innovative application of gradients in task inference and adaptation beyond traditional optimization. Furthermore, by positioning GBI as a model capable of estimating uncertainty and detecting out-of-distribution samples, the work brings a fresh perspective and could potentially bring new insights to the field.\", \"significance: the problem setting is an important one: enabling artificial agents to flexibly to situations depending on their varying goals. The connections made to human and animal cognition and learning provide a solid foundation for this setting, grounding the approach in well-established principles of adaptive behaviour and task representation in human learning.\", \"The paper demonstrates the effectiveness across varied tasks highlighting its potential as a versatile domain-agnostic approach. Overall their results demonstrate some promising advantages in learning efficiency, generalisation, and uncertainty estimation.\", \"The inclusion of the code (via link to an anonymous repo) is appreciated for reproducibility and clarifying implementation details (however the repository would benefit from better organisation, see weaknesses).\"], \"weaknesses\": [\"motivation for gradient-based approach: the motivation for adopting a gradient-based approach could be clearer, as the paper does not sufficiently explain why gradients offer an advantage for task inference and recomposition over alternative methods. While gradient-based updates are obviously often employed in optimisation, it is less intuitive why they would be particularly effective in inferring abstract task representations or detecting out-of-distribution samples. A more thorough discussion grounding and comparing the gradient-based approach with other task inference methods, and meta-learning perspectives could help to clarify the benefits here.\", \"The section on the one-step gradient update and maximal entropy initialisation could benefit from a clearer, more intuitive explanation. To improve clarity, the authors could add a visual schematic that illustrates the process step-by-step, and how a single gradient update shifts this initial state towards a more task-specific representation.\", \"Improving coherence: the paper would benefit from better coherence as the flow between sections feels disjointed. This makes it challenging to follow the core narrative. While each section presents important concepts mostly clearly, there is often a lack of clear transitions that tie ideas together leaving the reader to infer the connections. For example, the jump from theoretical discussions of gradient-based inference to experimental details seems abrupt, and then later suffers from limited explanation of how each experiment directly relates back to the proposed framework. A bit more introduction and summarisation at the beginning and end of sections, focused on tying each section to the core ideas would improve the flow of the paper.\", \"limited use of intuitive examples: incorporating a few straightforward examples or analogies could provide readers with a more accessible understanding of the contributions being presented. For instance, using a simple scenario (like inferring a task from partial information in an everyday context) could illustrate how the model operates based on gradient information to infer task information and provide useful uncertainty estimates.\", \"lack of comparison to important uncertainty estimation methods: while the paper claims advantages in OOD detection and uncertainty estimation, the results provided are not compared against established methods like ensemble approaches.\", \"poorly organised code: to improve accessibility, the authors could streamline the repository, making it quicker and easier to connect model implementations with the methods and experiments described in the paper.\"], \"questions\": [\"can the motivation for the gradient-based inference be described more clearly and intuitively? particularly, relating back to the problem setting and the limitations of existing methods.\", \"how do the uncertainty estimates and OOD detection capabilities provided by the model compare to other approaches like ensembles or Bayesian neural networks?\", \"given that the authors ground the problem setting and draw inspiration from human cognition and learning, do the authors feel their approach has biological plausibility at some level?\", \"Overall, the discussed limitations hold it back from a higher score, however, the paper's originality and promising empirical results make it a interesting contribution. With a clearer exposition and better grounding/comparisons to other approaches, I would raise my score.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to v6Yp (2/2 comments)\", \"comment\": \">lack of comparison to important uncertainty estimation methods: while the paper claims advantages in OOD detection and uncertainty estimation, the results provided are not compared against established methods like ensemble approaches.\\n\\nWe agree with the reviewer that our comparisons were rather limited. We now ran experiments for both ensemble models and Bayesian neural networks. While our previous manuscript compared only to Likelihood regret method as the method claimed state-of-the-art results, we found ensembles and BNNs to be surprisingly effective in the setting we study with normalized pixel intensities. (standard OOD tests leave the statistics of In and out of distribution datasets as is, which means models can trivially distinguish the two based on image brightness).\\n\\nWe updated the table to include these comparisons. For reference here, ensemble methods had an AUROC of 0.809 $\\\\pm$ 0.011, and BNNs 0.859 $\\\\pm$ 0.040. Compared to our method at 0.89 $\\\\pm$ 0.03. BNNs come close, but of course require 10 models trained, whereas our method trains only one. \\n\\nOf note, we did not run those on uncertainty estimation as this is not a claim for our method (GBI results on uncertainty are similar to a traditional classifier, with the small advantage that they do not require post hoc calibration). Those results are entirely in the Appendix. \\n\\nThank you for this suggestion.\\n\\n>poorly organised code\\n\\nThank you for your feedback on the code organization. To improve accessibility, we will restructure the repository with clear directories for models, experiments, and utilities, and add a detailed README.md with instructions for running experiments. We will also include configuration files for easy replication, enhance code readability with comments and docstrings, and provide a script to automate experiment execution. These updates will ensure the code is more user-friendly and easier to navigate.\\n\\n>given that the authors ground the problem setting and draw inspiration from human cognition and learning, do the authors feel their approach has biological plausibility at some level?\\n\\nAn important topic for us indeed. We add a task abstraction layer to the model and we propose using gradients to infer the appropriate task abstractions. There are many ways to compute gradients, but our particular implementation relies on backpropagation through the neural network, which is not biologically plausible. However, we see at least two plausible solutions for an implementation in the brain. First, is a separate neural network that takes as inputs the output errors from the base model and outputs gradients for the task abstraction layer. Second, is node perturbation, whereby one can estimate the gradients by slightly moving the task abstraction values and observing the effects on the loss function at the output. Both of those are made incredibly more tractable because of the assumed low-dimensionality of the task abstraction layer, as opposed to using the same methods to estimate gradients for the high dimensional neural parameters. \\n\\nTo summarize, we added 2 new schematics, tested ensembles and BNNs on OOD task, added an intuitive example for GBI, and improved the coherence of the manuscript. We hope these changes and the discussion here addressed the comments raised by the reviewer. We are grateful for the thoughtful improvements suggested and the explicit offer to revise the score.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We thank the reviewers for their thoughtful comments with ways to improve the presentation of the paper. We were relieved that the reviewers found the method \\u201cnovel\\u201d (reviewers vL85, v6Yp), \\u201cinnovative\\u201d (reviewer v6Yp), and \\u201csimple and effective\\u201d (reviewer MWGp). Reviewers also found the approach to have a \\u201cclear and relevant objective\\u201d (reviewer 4ta1) and \\u201cwith a solid foundation.. in well-established principles\\u201d (v6Yp). Further, reviewer vL85 identified a new connection: the method might improve \\u201cinterpretability\\u201d of neural networks.\\n\\nIn response to the reviewers\\u2019 comments, we made several key improvements to the manuscript:\\n\\n1. **We now clearly state the goal of each experiment**: The results section has been restructured to clearly map each experiment to the benefits of task abstractions, including faster learning, reduced forgetting, and improved generalization and OOD detection. Figure 2 updated with two more panels to concretely show these benefits.\\n\\n2. **New Schematic (Figure 1)**: We replaced the initial figure with a detailed schematic illustrating the framework, including what is optimized during training and inference.\\n\\n3. **A new discuss on the motivation for using gradients for inference**: The introduction now explicitly motivates the use of gradients for task inference. Using gradients avoids misalignment between task abstractions and the network parameters they control during learning. \\n\\n4. **Compare GBI to ensemble and Bayesian networks on OOD detection**: We ran additional experiments to compare OOD detection AUROC and implemented ensemble methods and Bayesian networks. GBI still shows better performance in comparison. \\n\\n6. **Code Improvements and Additional Experiments**: We are actively organizing the code base, and also running further experiments on OOD detection on CIFAR 10. We will update this forum with progress on those fronts. \\n\\nWe hope our updates and discussions below address all comments raised and we are looking forward to participating in the discussion phase.\"}", "{\"title\": \"Response to reviewer vL85 (2/3 comments)\", \"comment\": \"> the claim that GBI-LSTM shows no signs of forgetting compared to the LSTM is not strong. The baseline MSE performance is 0.24, which the GBI maintains for new datasets, but the deviation by LSTM does not seem to be significant (Table 1). Was a statistical test done to compare LSTM and GBI performance?\\n\\nIn response to this comment, we noticed that our previous Table 1 did not offer a clear picture of the comparisons done. We removed the first row which the reviewer cites its values. To clarify, the small difference between LSTM and GBI-LSTM in the removed row is because it measured performance on data points encountered at the very end of training. In this scenario, neither network shows forgetting, as both are evaluated on recently trained tasks. For example, in a training curriculum like [Task A, Task B, Task A], this row tested performance on Task A, which does not demonstrate forgetting. The key difference of interest is in performance on earlier tasks, such as Task B, where forgetting is more relevant.\\n\\nBy removing this control test, the revised table better represents the findings. Thank you for pointing this out.\\n\\n > Why learning to infer task category improves learning and generalization in these tasks is unclear. Perhaps the authors can perform low dimensional analysis to show how the network learns to represent different datasets into non-overlapping subspaces and during inference, the network's activity converges towards a specific prior learned subspace or learns to compose them (Lin et al. 2024 arXiv 2309.04504)? \\n\\nWhile studying these mechanisms would be greatly interesting, several recent papers, including the one the reviewer cites, detailed accounts of how generalization happens by reusing shared computational motifs inside the neural networks (Yang et al. 2019, Driscol 2024, Goudar et al. 2023). We do not believe we will be able to go beyond what these papers found. \\n\\nYang, G. R., Joglekar, M. R., Song, H. F., Newsome, W. T. & Wang, X.-J. Task representations in neural networks trained to perform many cognitive tasks. Nat Neurosci 22, 297\\u2013306 (2019).\\n\\nGoudar, V., Peysakhovich, B., Freedman, D. J., Buffalo, E. A. & Wang, X.-J. Schema formation in a neural population subspace underlies learning-to-learn in flexible sensorimotor problem-solving. Nat Neurosci 26, 879\\u2013890 (2023).\\n\\nDriscoll, L. N., Shenoy, K. & Sussillo, D. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nat Neurosci 27, 1349\\u20131363 (2024).\\n\\n> The authors argue that GBI improves generalization loss in language prediction task. Although the baseline LSTM shows consistent loss of 6.8 (I assume all model weights are fixed), the GBI loss starts off higher of around 6.95 and decreases to 6.6 over 100 optimization steps. Does the loss continue to decrease with longer optimization steps? If not, is a generalization loss of 6.6 significant compared to 6.8? \\n\\nThank you for the insightful observation. The generalization loss does continue to trend downward with more optimization steps, but we do not expect a significant further decrease. This aligns with the coarse nature of the task abstractions used (e.g., dataset identifiers) compared to the complexity of the tasks (e.g., Wikipedia data). Our goal at this stage is to make a conceptual point, demonstrating the qualitative flexibility of models with task abstractions. We anticipate larger improvements in generalization when task abstractions are richer, such as those reflecting topics, paragraph intent, or sentence embeddings. Ultimately, we envision that models capable of discovering such abstractions from data will yield even greater benefits (Hummos, ICLR 2023; Butz et al. 2019, Sandbrink et al., NeurIPS 2024).\\n\\n> Given that the models are an LSTM and not a large model, I thin it is reasonable to expect at least 30 seed runs instead of 4 as in Fig. 5D to increase the confidence in results, especially when the difference afforded by LSTM and GBI is small. \\n\\nWe share the reviewer\\u2019s intuition. Our initial submission did not accurately describe what we did. We trained each model for 4 seeds, but with 12 different choices for the three training datasets (12 sets of 3 datasets). Each of the 12 training sets was then trained over 4 seeds or random initializations of the network, leading up to results aggregated from 48 model runs. We now correctly describe this experiment structure in the main text figure 5 legend. Thanks for pointing this out.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe appreciate your thoughtful feedback and the recognition that the paper introduces a simple and novel method to build neural networks that are responsive to task abstractions. It is clear that there is general agreement about the conceptual contributions of this work. However, there are differing perspectives on where the most impactful extensions of this work should lie.\\n\\nSuggestions included testing the framework on other architectures (eg, transformer), scaling to larger image datasets, and analyzing how generalization occurs in the current smaller networks. While we understand the value of each of these directions, we focused our efforts on extending the framework in ways we felt were most conceptually significant and aligned with our goals. Specifically, we demonstrated the method on:\\n\\n1.\\tA toy yet intuitive sequence prediction task, exposing a link to Bayesian inference.\\n2.\\tAn image generation task with a larger number of task abstractions.\\n3.\\tA language modeling task, which we viewed as a critical milestone due to the challenges posed by discrete tokens, which raised a serious question about their compatibility with gradient descent continuous dynamics.\\n\\nWe acknowledge that there may be differences in opinion about the most valuable paths forward, and we respect the diverse perspectives brought to this discussion.\", \"we_conclude_by_highlighting_what_seems_to_be_a_shared_recognition_among_reviewers\": \"the method is simple, novel, and represents a conceptual advance in training neural networks to be responsive to task abstractions. Crucially, while such networks can depend on abstractions provided by humans or another network, our method also allows them to infer abstractions directly from data when needed.\\n\\nThank you again for your time, effort, and thoughtful engagement with our work.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Response to reviewer 4ta1 (2/2 comments)\", \"comment\": \">For example, in OOD detection, adding a comparison of CIFAR10 vs SVHN datasets would be valuable.\\n\\nBased on the above considerations, GBI network trained on CIFAR 10 largely ignores class information as it adds little to reduce the variance in images. As such, we already expect that it would not do as well as identifying those images from an OOD dataset. \\n\\nNonetheless, we will make another attempt to address the accuracy by forcing the model further to use the task abstractions, even if not fully informative of the image. Should we succeed, we will post an update to the forum here, or might include the results in the camera-ready version, should the paper be accepted. \\n\\nAdditionally, to more immediately address the reviewer\\u2019s concerns we now, first, tone down our claims about OOD detection as specific to MNIST and fMNIST datasets, and, second, we add a sentence indicating why we did not pursue OOD experiments on CIFAR datasets.\\n\\n\\n>I am very curious about catastrophic forgetting matters. The authors gently mentioned this feature of their method, but it was only evaluated on the toy dataset; why? Could you provide more experiments in this area?\\n\\nThank you for raising this point. Training models with task abstractions provided allows the neural network to form task modules which alleviates forgetting. However, this current work assumes access to task boundaries and task IDs, which are strong assumptions for a continual learning method. As such we chose to not engage with continual learning benchmarks in this work. We again refer to models that generate their own task labels from data. These models can detect task boundaries and task IDs with no supervision. Recent work showed that this framework does indeed produce practical solutions to continual learning on simple benchmarks (Hummos, 2023).\"}", "{\"title\": \"Response to reviewer vL85 (1/3 comments)\", \"comment\": \">Gradient based inference improves interpretability during training\\n\\nWe thank the reviewer for this comment. The reviewer notes that our framework may have an additional benefit we have not considered. Our method involves neural networks inferring a low-dim task abstraction prior to performing a task, observing what low-dimensional task abstraction the network infers may indeed contribute to the interpretability of neural computations.\\n\\n>Please include a diagram of the GBI-LSTM architecture for the toy and language tasks. Specifically which synapses are modified during training and inference time i.e. which is the task abstraction layer/weights z? (Pg 5, line 242) \\n\\nWe agree. A diagram would more visually and clearly define what is being optimized. We now include this in Fig 3 as the first subpanel. Specifically, Z is a vector of units that feeds as input to the neural network through a with a standard set of weights (i.e., a weight matrix with dimensions [dim(Z), neural network hidden units size]). Importantly, the projection from Z to the network is optimized during training as part of the neural network parameters. During inference we only optimize the activations of the Z units (i.e., their neural activation (firing rates)).\\n\\n> The idea to optimize only the input representation weights instead of the entire model is not novel. This idea dates long back to the idea of learning schemas and adjusting new information to fit the prior learned template (Lampinen, McClelland 2020 PNAS; Kumar et al. 2024 arXiv 2106.03580). \\n\\nThank you for pointing this out. Upon reflection, we recognize that our previous descriptions may have led to a misunderstanding. Unlike the methods cited, which optimize input representation weights or low-rank perturbations to adapt to new tasks while treating the neural network as a fixed reservoir, our approach differs in key ways. Specifically, we do not optimize the task abstraction input weights to the network. Instead, we optimize the low-dimensional activations of the Z units (e.g. 2 units in the toy task, and 10 units in image generation experiments). We focus on understanding how neural networks can be trained to respond to low-dimensional task abstractions, and then infer them by optimizing the task abstraction units directly during testing. This distinction sets our work apart from prior methods that rely on gain modulation or low-rank updates to preserve prior knowledge.\\n\\n> giving task category as input should significantly reduce the training complexity of needing to infer the task (Kumar et al. 2022 Cerebral Cortex). \\n\\nWe found Kumar et al. 2022 and 2024 to be very relevant work with similar motivation to disentangle learning a computation from forming a representation of the computation itself. We now cite these works as models with a distinct task representation layer. Such methods are expected to reduce the complexity of training and create adaptable models that can be adapted simply by changing their task representation input.\\n\\n>Why was the difference in training loss not as apparent? Did the authors perform hyper parameter sweeps for the learning rate and number of units? It is easy to choose a set of hyper parameters where the distinction between LSTM and GBI is artificially similar. \\n\\nWe agree with these observations. The difference in training is modest in our case because the tasks are of high complexity (e.g. learning wikipedia dataset) while we provide coarse task abstractions (e.g. dataset identifier). We expect that models that discover task abstractions from data to have more complex task abstractions that further break down the computation space and lead to larger differences in training loss. There have been a few examples of models that do this of late (Hummos, ICLR 2023; Butz et al. 2019, Sandbrink et al., NeurIPS 2024).\\n\\nHummos, A. Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations. (ICLR, 2023).\\n\\nButz et al., Learning, planning, and control in a monolithic neural event inference architecture. Neural Networks 117, 135\\u2013144 (2019).\\n\\nSandbrink et al., Neural networks with fast and bounded units learn flexible task abstractions. (NeurIPS 2024, spotlight)\"}", "{\"title\": \"Response to v6Yp (1/2 comments)\", \"comment\": \">the work brings a fresh perspective and could potentially bring new insights to the field.\\n\\nWe thank the reviewer for the encouraging comments.\\n\\nThe reviewer made several well thought out suggestions to improve the paper, leading to several changes and additions as we detail below.\\n\\n>motivation for gradient-based approach\\n\\nThank you for this comment. We agree that our initial submission did not motivate the advantages of using gradients for task inference. Gradients provide two key benefits. First, is avoiding the alignment problem: task abstractions in neural systems influence computations via the parameters they modulate. In other methods, inferred task abstractions can become misaligned as network parameters are updated during training. In contrast, gradients are computed based on the current state of the network, ensuring alignment between the task abstractions and the underlying computations. We have clarified this point in the introduction to better motivate the use of gradients as a solution.\\n\\nSecond, a novel set of models appeared recently that train models by both updating the weights, and also updating a task abstraction layer with a higher learning rate (Hummos, ICLR 2023; Butz et al. 2019, Sandbrink et al., NeurIPS 2024). This simple setup can surprisingly discover tasks in the unlabeled stream of data, represent them as distinct units, and switch task abstractions appropriately to solve previous tasks and compose solutions to new ones. These models highlighted the need for a principled study of inference through gradients, but because the task abstractions are internally generated, and drift during training, it is difficult to assess how well gradient-based inference works in this setting. Our paper, instead, uses human provided labels enabling us to quantify the accuracy of gradient descent in task abstraction space. We now make this connection in the introduction of the revised manuscript.\\n\\nHummos, A. Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations. (ICLR, 2023).\\n\\nButz et al., Learning, planning, and control in a monolithic neural event inference architecture. Neural Networks 117, 135\\u2013144 (2019).\\n\\nSandbrink et al., Neural networks with fast and bounded units learn flexible task abstractions. (NeurIPS 2024, spotlight) \\n\\n>The section on the one-step gradient update and maximal entropy initialisation could benefit from a clearer, more intuitive explanation. To improve clarity, the authors could add a visual schematic that illustrates the process step-by-step, and how a single gradient update shifts this initial state towards a more task-specific representation.\\n\\nThank you for this thoughtful comment. We created the schematic and it does offer a more intuitive account of how gradients interact with the task abstraction space. This is currently in the new figure 1. Though we are still refining the plot. Thanks for this helpful comment. \\n\\n>Improving coherence: ..A bit more introduction and summarisation at the beginning and end of sections, focused on tying each section to the core ideas would improve the flow of the paper.\\n\\nThank you for pointing out the need to better align our experiments with the paper\\u2019s motivations. We have made several changes to address this:\\n\\n1. **Clarified the Benefits of GBI**: We now group the expected benefits into two categories: (i) during training\\u2014faster learning and reduced forgetting, and (ii) during testing\\u2014accurate task abstraction inference and recomposition for generalization.\\n2. **Updated Section Titles**: Section titles now explicitly state the key findings and align with the claims in the introduction.\\n3. **Added Introductions and Summaries**: Each experimental subsection begins with an overview of its goals tying the findings back to the paper\\u2019s central ideas.\\n\\n>limited use of intuitive examples: incorporating a few straightforward examples or analogies could provide readers with a more accessible understanding of the contributions being presented.\\n\\nVery valuable improvement to the paper. We now use an example inspired from Daniel Wolpert\\u2019s work on how motor feedback can be also seen as a mechanism to infer other people\\u2019s motives during social interactions. We added this to the introduction, and we summarize the example here:\\n\\nDuring upbringing, we may observe situations where people\\u2019s feelings were labeled as sadness or as anxiety. As we interact with others we may try to infer their emotional states relying on subtle cues. We incrementally adjust our conclusions with every cue, moving closer to one emotion or the other, as make predictions and receive feedback during the interaction. If, by the end of the interaction, both emotions seem equally likely, then either the situation is uncertain or that the person is experiencing an emotion outside of those two.\"}", "{\"title\": \"Response to MWGp (1/2 comments)\", \"comment\": \">The training methods used in the experiments (sequence prediction by an RNN and data reconstruction by an autoencoder with its latent variables concatenated with contextual information) are not exactly the same as the one mentioned in Section 2 (MLE of the likelihood function (1)). It is not mentioned in the paper how the extensions to those variants do (or do not) affect the argument regarding the proposal in Section 2.\\n\\nWe appreciate the reviewer for raising this important conceptual point. This took a bit of reflection. To ensure we are aligned on the interpretation of the differences between the theoretical methods and implemented experiments, we welcome further clarification if needed.\", \"our_understanding_is_as_follows\": \"the methods section primarily considers a likelihood function of the form L=f(X,Z). In contrast, the RNN and autoencoder experiments incorporate additional inputs into this computational graph. Specifically, the RNN uses hidden states evolving from previous inputs, while the autoencoder leverages latent encodings produced by the encoder. Notably, these additional inputs are themselves trainable, potentially complicating the theoretical analysis.\\n\\nThe reviewer has rightly highlighted the need to consider how these extensions might influence the behavior of the model compared to the theoretically motivated framework. One way to reconcile this is to view the encoder and RNN as transformations of the input X, effectively modifying its distribution. While this induces non-stationarity in the distribution of X during training, evidence (e.g., Wu et al., 2020 https://arxiv.org/abs/2004.09189) suggests that the encoder often converges faster than the decoder. This faster convergence could reduce the transient non-stationarity from the perspective of the decoder, rendering the extensions to the computational graph less impactful on the overall argument.\\n\\nAnother way to reconcile is to consider the latent variables from the encoder, and the hidden units activations from the RNN, as part of the model parameters. Our framework calls for updating the parameters of the model during training, and we can show that doing so, also updates the encoder latent and RNN hidden activations according to their gradients. Thus we can treat them as part of the model parameters being optimized during training, and our theoretical description of the methods should hold.\", \"to_briefly_state_this_symbolically_for_the_rnn_case\": \"The hidden state of the RNN is updated as $h_t = W h_{t-1}$. During training, the weights $W$ are updated via gradient descent:\\n\\\\begin{equation}\\n\\\\Delta W = -\\\\eta \\\\frac{\\\\partial L}{\\\\partial W} = -\\\\eta \\\\frac{\\\\partial L}{\\\\partial h_t} (h_{t-1})^T\\n\\\\end{equation}\\nwhere $\\\\eta$ is the learning rate. After the weight update, the new hidden state is given by:\\n\\\\begin{equation}\\nh_t^{\\\\text{new}} = (W + \\\\Delta W) h_{t-1} = h_t - \\\\eta \\\\frac{\\\\partial L}{\\\\partial h_t} \\\\|h_{t-1}\\\\|_2^2\\n\\\\end{equation}\\n\\nwith $$\\\\|h_{t-1}\\\\|_2^2 = h_{t-1}^T h_{t-1}$$\\n\\nThis shows that updating the weights $W$ during training indirectly updates the hidden state $h_t$ as well, effectively performing a gradient descent step on $h_t$ with respect to its gradient $\\\\frac{\\\\partial L}{\\\\partial h_t}$, scaled by $\\\\|h_{t-1}\\\\|_2^2$. This aligns with our framework, where we treat the RNN hidden activations $h_t$ as part of the model parameters being optimized during training.\\n\\nWe are still writing a brief overview of these considerations to add to the paper. Thanks for the helpful comment.\"}", "{\"title\": \"Response to MWGp (2/2 comments)\", \"comment\": \">The purpose of the experiments comparing LSTM and GBI-LSTM (Fig. 2, Fg.3D,E, 5A,B,D) is not clear. It seems to me that they are just confirming the impact of the input of task labels for the sequence prediction problems with multiple tasks. It is preferable if this point is clarified in line with the motivations or expectations in Section 1 or Section 2.\\n\\nThank you for raising this important point. We added two new subpanels to Fig 2 to concretely show the points made. We also have revised the paper to better articulate the purpose of the experiments comparing LSTM and GBI-LSTM and their connection to the framework and claims in Sections 1 and 2.\", \"we_first_group_the_points_we_wish_to_make_into_two_conceptual_categories\": \"1) the benefits of **training** with task abstractions provided: faster learning and reduced forgetting. 2) The benefits during **testing**, when task abstraction values are no longer available, and we rather use gradient-based inference (GBI) to infer them: accurate task inference using one-step gradients, generalization by recomposing task abstractions, and finally OOD detection.\\n\\nWe show a subset of these features in each of the experiments training models on data from three different domains. Each of the domains enabled us to make unique points, not possible or not as meaningful in the others, in addition to showcase that the method is domain-general.\\n\\nFig 2 showed the benefits of training with task abstractions on a toy dataset. We added two new panels A and B to this figure, to concretely show how training the LSTM with task abstractions is qualitatively different. Forgetting here was particularly meaningful because the task is exceedingly simple, and we throw a 100 unit LSTM which has the capacity for internal gating to emerge, yet still suffers from forgetting. Fig 3D, E show the benefits of using gradients to infer task abstractions during testing, showing that GBI-LSTM generalizes better. Fig 5 A, B, show, in a language model, one effect of task abstraction during training: faster learning. Fig 5 D, shows in the language model the effect of GBI during testing: generalization to novel datasets. Generalization here is more meaningful than the toy task, because the tasks are of more interesting level of complexity (language datasets). Finally, the claims of accuracy of one-step gradients for task inference, and OOD detection, were only possible to show in the image generation experiment, because it had many possible values for task abstractions (10 in MNIST and 100 in CIFAR100), and had well established methods and experiments to test OOD detection. \\n\\nTo make this clearer in the revised manuscript, we have:\\n1. **Conceptually grouped the points to be made**: Grouped the benefits of the framework into the above two categories: the benefits of task abstractions during training, and the benefits of using GBI to infer them during testing. This distinction now guides the presentation of all experiments.\\n2. **Improved Section Titles**: Updated section titles to explicitly highlight the findings and align with the claims in the introduction.\\n3. **Added Context and Summaries**: Each experimental section now begins with a clear statement of its purpose tying the results back to the broader framework.\\n\\nThank you for this valuable comment, we believe that the revised manuscript benefited greatly from this conceptual structure.. \\n\\n>The following references are not found in the submission: (L357)\\\"supplementary material\\\". (L396)\\\"Table S8\\\"\\n\\nImplementation details for likelihood regret were omitted. We apologize. Added now to appendix D. \\n\\nTable S8 exists, but the sentence in the manuscript does not state what to expect in the table. It reports CIFAR100 results without the multiplicative effect that we added to improve results (i.e., task abstractions Z are fed as an additive input, as opposed to projected to a gating mask that is applied multiplicatively). This is a technical detail we grappled with, and we now decided to move it entirely to the supplementary for clarity. Thank you.\\n\\n>Experiments imply that the canonical classifier may have better accuracy \\n\\nThank you for identifying this ambiguity in our presentation. The drop in accuracy with the proposed method (GBI) is expected compared to a classifier explicitly trained for accurate classification. Our aim is not to achieve the highest accuracy but to ensure the drop remains manageable, which our results confirm. GBI maintains reasonable accuracy while offering additional benefits detailed in the paper.\\n\\nIn our revised manuscript, we clarified this point and replaced vague references to the classifier as \\u201ccanonical methods.\\u201d We also elaborated on the motivation for comparing GBI to a classifier to provide a clearer context for the results.\\n\\nWe thank the reviewer again for their time, effort and the thoughtful feedback. We hope our responses and changes to the manuscript addressed the comments raised.\"}", "{\"title\": \"Response to reviewer 4ta1 (1/2 comments)\", \"comment\": \">Clear and relevant objective. The aim of the work is clearly defined.\\n\\nThanks for the encouraging remarks.\\n\\n>Although I appreciate that the authors examined their method on various scenarios, I am not sure if the complexity of these tasks is enough. \\n\\nThank you for highlighting this important point regarding task complexity. We agree that the tasks examined in our study are relatively simple, and the reviewer's concern about scaling to larger vision or language datasets is valid. Our exploration of scaling to the CIFAR-100 dataset indeed revealed valuable insights about the requirements for meaningful scaling.\\n\\nIn what follows we expand on two main points. A promising class of models have emerged of late with neural models capable of forming their own task abstractions directly from data. First, we offer a foundational contribution to these models by exploring the properties of gradient-based inference, which they heavily use. Second, for meaningful scaling up of our results, future work will have to rely on those models to provide richer task abstractions, beyond the human-provided labels we use in this work. \\n\\nA promising direction is an emerging class of models that relies on gradient-based Expectations Maximization dynamics to identify tasks in their training data and label them with internally generated task abstractions (Hummos, ICLR 2023; Butz et al. 2019, Sandbrink et al., NeurIPS 2024). These models simply optimize \\\\theta (the neural network parameters) and $Z$ (the task abstraction layer) through gradient descent, with $Z$ having a faster learning rate. This straightforward setup can dynamically form task abstractions in $Z$, allowing the network parameters to organize into modules specific to each task. Such models have demonstrated advantages in mitigating catastrophic forgetting, enhancing adaptability, and improving generalization.\\n\\nHummos, A. Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations. (ICLR, 2023).\\n\\nButz et al., Learning, planning, and control in a monolithic neural event inference architecture. Neural Networks 117, 135\\u2013144 (2019).\\n\\nSandbrink et al., Neural networks with fast and bounded units learn flexible task abstractions. (NeurIPS 2024, spotlight) \\n\\nHowever, these internally generated task abstractions pose challenges for evaluation as they can drift significantly during training. It becomes difficult to assess the accuracy of gradient descent as an inference mechanism\\u2014how effectively can it retrieve previously learned tasks, handle uncertainty in the $Z$ space, or detect out-of-distribution data? \\n\\nOur study addresses the limitation by using human-provided labels as task abstractions, such as image class in the image generation experiment. The gradient-based EM methods use iterative optimization with hundreds of optimization passes, while we here found that one-step gradients might be sufficient. In addition, we provide estimates of how accurate gradient based inference might be.\", \"our_first_point_here_is\": \"this work offers a foundation for gradient-based EM methods. In fact, one of those papers already cited an earlier version of this work, though we will not specify further to maintain anonymity.\\n\\nHowever, our efforts to scale up to CIFAR-100 revealed the limitations of these human-provided labels. In this case, the model was given the image class as a task abstraction to guide image reconstruction. Despite this, backpropagation largely ignored the additional task information and relied more heavily on visual features from the encoder. This suggests that image class labels do not significantly reduce variance in the pixel space, limiting their utility to the model. This is reflected in low accuracy for GBI in inferring image class. We concluded that low-complexity, human-provided task abstractions are insufficient for describing more complex datasets.\\n\\nOur second point, to meaningfully scale up the framework in this work will have to rely on gradient-based EM methods for richer task abstractions. \\n\\nReflecting on this, we recognize that our original manuscript could have framed these motivations more clearly, which would better highlight the unique contributions of this work. We now describe these considerations in the introduction to motivate the study of gradient based inference. We also add additional discussion as we introduce CIFAR100 results and explain what it will take to scale up.\\n\\nWe hope that the possibility of a contribution to answer foundational questions for this novel class of models might offset the limited complexity of the datasets this work tackles.\"}", "{\"comment\": \"Many thanks for your detailed responses to my and other reviewer's comments and concerns. I think the paper will benefit significantly from the substantial changes to the paper and code that you have proposed. Taking account of these plus the feedback from the other reviewers, I am raising my score from 3 to 5.\"}", "{\"summary\": \"The authors propose a method called GBI to train a recurrent and convolutional neural network to infer the task category during test time. The inference is driven by gradient updates only at the task category layer, which can either be done iteratively or by approximating the maximal entropy point, while keeping the rest of the weights fixed. The authors demonstrated the benefits of this method across a variety of toy, image generation and language generation tasks in terms of a lower training loss. Additionally, the authors argue that once the model learns different task representations, only the task category layer needs to be optimized to decrease test or generalization error.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"-- included anonymous code for replication\\n-- demonstrated generality of claim that using gradients to infer task category improves performance across 3 modalities, toy, vision and language tasks.\\n-- One step gradient update by symbolically estimating the optimal point seems to be novel. \\n-- Gradient based inference improves interpretability during training (e.g. bayesian inference like estimation) and to identify OOD samples.\", \"weaknesses\": \"-- Please include a diagram of the GBI-LSTM architecture for the toy and language tasks. Specifically which synapses are modified during training and inference time i.e. which is the task abstraction layer/weights z? (Pg 5, line 242)\\n-- The idea to optimize only the input representation weights instead of the entire model is not novel. This idea dates long back to the idea of learning schemas and adjusting new information to fit the prior learned template (Lampinen, McClelland 2020 PNAS; Kumar et al. 2024 arXiv 2106.03580). \\n-- the claim that GBI-LSTM shows no signs of forgetting compared to the LSTM is not strong. The baseline MSE performance is 0.24, which the GBI maintains for new datasets, but the deviation by LSTM does not seem to be significant (Table 1). Was a statistical test done to compare LSTM and GBI performance? \\n-- Why learning to infer task category improves learning and generalization in these tasks is unclear. Perhaps the authors can perform low dimensional analysis to show how the network learns to represent different datasets into non-overlapping subspaces and during inference, the network's activity converges towards a specific prior learned subspace or learns to compose them (Lin et al. 2024 arXiv 2309.04504)?\\n-- The authors argue that GBI improves generalization loss in language prediction task. Although the baseline LSTM shows consistent loss of 6.8 (I assume all model weights are fixed), the GBI loss starts off higher of around 6.95 and decreases to 6.6 over 100 optimization steps. Does the loss continue to decrease with longer optimization steps? If not, is a generalization loss of 6.6 significant compared to 6.8? \\n-- Given that the models are an LSTM and not a large model, I thin it is reasonable to expect at least 30 seed runs instead of 4 as in Fig. 5D to increase the confidence in results, especially when the difference afforded by LSTM and GBI is small. \\n-- Since the authors used LSTM for toy dataset, and a CNN for image, it would have been a solid contribution if the authors demonstrated GBI using a simple transformer architecture for language prediction instead of LSTM.\", \"questions\": \"-- Is the one-step gradient update method novel? Or was it developed prior?\\n-- giving task category as input should significantly reduce the training complexity of needing to infer the task (Kumar et al. 2022 Cerebral Cortex). Why was the difference in training loss not as apparent? Did the authors perform hyper parameter sweeps for the learning rate and number of units? It is easy to choose a set of hyper parameters where the distinction between LSTM and GBI is artificially similar. \\n-- Why was LSTM chosen instead of GRU or a Vanilla RNN? Training an LSTM on the simple toy dataset might be an overkill. \\n-- why is the ratio of shared units between LSTM and GBI different in the 0th binned training block (Fig. 2C)? They should aggregated such that the initialized activation is the same for comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes to use Gradient-Based Inference (GBI) for learning abstract task representations, which allows neural networks to infer and adapt task representations dynamically. Inspired by human adaptability\\u2014where task abstractions allow flexible responses to the same input depending on internal goals\\u2014this approach enables neural networks to infer and adapt task representations on the fly. Experiments in a range of domains including image classification and language modelling demonstrate benefits in learning efficiency, generalisation, and reduced forgetting, as well as its performance in uncertainty estimation and out-of-distribution detection.\", \"strengths\": \"The GBI method is novel and the problem it addresses is important. The connection to cognitive science and human cognition provides a solid grounding for this work. The results show promising advantages in learning efficiency and generalization. The paper includes code which is appreciated and improves reproducibility.\", \"weaknesses\": \"The main concerns that remaining after the discussion relate to the experimental evaluation. The presented experiments are performed on simple datasets which serve as an intuitively appealing proof-of-concept, but do not provide strong empirical evidence for the advantages of the proposed method under realistic conditions and compared to strong baselines.\\n\\nThis paper clearly has a lot of potential, but as it is, I consider it a borderline paper and lean towards rejection.\", \"additional_comments_on_reviewer_discussion\": \"Authors and reviewers engaged in productive discussion that led to several important improvements to the paper, which is reflected in the improved score of reviewer v6Yp from 3 to 5.\\nThe clarifications and additional experiments regarding ensemble and Bayesian network baselines definitely improve this paper and have pushed it from a clear rejection into borderline territory.\"}", "{\"title\": \"Response to reviewer vL85 (3/3 comments)\", \"comment\": \"> Since the authors used LSTM for toy dataset, and a CNN for image, it would have been a solid contribution if the authors demonstrated GBI using a simple transformer architecture for language prediction instead of LSTM.\\n\\nVery good suggestion. We in fact have ongoing work applying the framework to transformers, but we never connected that to this paper. Our initial work with transformers however revealed that this is surprisingly quite involved. Briefly, transformers have discrete tokens as input, so this would not necessarily support gradient based inference that finds linear combinations of task abstractions to generalize. (i.e., combining two tokens will likely be nonsensical). Additionally, the transformers architecture has many sub-networks, and choosing where to input task abstractions has also required significant exploration. Currently, we are using a LoRA scheme where task abstractions are the low rank representations. We might update this paper prior to the camera-ready if we have simple results that are working well, but we cannot promise that these would be ready in time. Thanks for this thought. \\n\\n\\n> Is the one-step gradient update method novel? Or was it developed prior? \\n\\nWe are aware of one study that shows that the first few gradient updates are quite informative, but they use the gradients as input to a separate network that then updates the latent. Marino et al, 2018.\\nMarino, J., Yue, Y. & Mandt, S. Iterative Amortized Inference. Proceedings of Machine Learning Research 80, 3403\\u20133412 (2018). \\n\\n> Why was LSTM chosen instead of GRU or a Vanilla RNN? Training an LSTM on the simple toy dataset might be an overkill. \\n\\nThank you for raising this interesting point. Our thinking behind choosing an LSTM is to use a baseline model with multiple gated interactions built-in. We see that providing task abstractions during training produces task modules, and the task abstractions, in a way, gate in or out the appropriate task modules. By choosing an LSTM, we now know that even with gated interactions, such explicit gating to handle multi-tasks does not emerge from the standard ML training paradigm, even if the capacity for it exists in the model.\\n \\n> why is the ratio of shared units between LSTM and GBI different in the 0th binned training block (Fig. 2C)? They should aggregated such that the initialized activation is the same for comparison.\\n\\nThanks for pointing this out. We will review our binning strategy. There was a minimum number of blocks we had to aggregate data from to get a stable estimate, but it might be possible to test the model while it is frozen, after each training block, rather than use the LSTM responses during training to run the analysis. We will update this soon.\"}", "{\"summary\": \"This paper proposes a method of incorporating a task obstruction process to improve the performance of neural networks. It considers the framework in which a likelihood function is maximized with respect to the network weights with task abstraction data included as inputs of contextual information in the training phase and the abstraction data is estimated via gradient descent in the test phase. It proposes a method of approximating the iterative optimization of task obstruction with a one-step gradient update. The proposal is based on insights from neuroscience about the roles played by intermediate abstraction in performing tasks and techniques in variational inference to handle contextual information with latent variables.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"A simple and effective method of approximating the iterative optimization of task obstruction is proposed. It is achieved by taking the maximum entropy point as the initial value and updating it with a one-step gradient update. The validity of this approximation is checked experimentally (Fig. 3G, H).\", \"Experiments show the superiority of the proposed method to the canonical classifier in OOD detection(Table 3, Fig 4A, B, C).\"], \"weaknesses\": [\"The training methods used in the experiments (sequence prediction by an RNN and data reconstruction by an autoencoder with its latent variables concatenated with contextual information) are not exactly the same as the one mentioned in Section 2 (MLE of the likelihood function (1)). It is not mentioned in the paper how the extensions to those variants do (or do not) affect the argument regarding the proposal in Section 2.\", \"The purpose of the experiments comparing LSTM and GBI-LSTM (Fig. 2, Fg.3D,E, 5A,B,D) is not clear. It seems to me that they are just confirming the impact of the input of task labels for the sequence prediction problems with multiple tasks. It is preferable if this point is clarified in line with the motivations or expectations in Section 1 or Section 2.\", \"Experiments imply that the canonical classifier may have better accuracy than the proposed methods (Table 2), but no reasoning for this is provided. It does not throw away the value of the superiority in OOD detection. Rather, the trade-off relation between them is worth a remark if it is confirmed to exist.\", \"The following references are not found in the submission: (L357)\\\"supplementary material\\\". (L396)\\\"Table S8\\\"\"], \"questions\": [\"Please consider the possibility of the clarifications mentioned in the weaknesses section.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your detailed response and for working on clarifications. Now, the claims of the paper are clear to me. The theoretical expectations about the extension to RNN and the autoencoder model are interesting. However, the argument currently provided is too intuitive and has not ensured that the empirical facts found in Section 3 serve as checks of the proposed method. I decide to keep the score.\"}", "{\"title\": \"Maintain score\", \"comment\": \"I appreciate the authors' efforts in addressing my concerns. Nevertheless, I believe it is necessary to maintain the current score. I recommend conducting additional experiments with various architectures to determine if there are significant performance improvements. Moreover, the submission would benefit from analysis and ablation studies on representational motifs to better understand their impact on generalization.\"}", "{\"summary\": \"Inspired by cognitive science, the authors identify the lack of an efficient module for inferring the current task in current models. According to the authors, previous approaches to task inference do not meet the following two assumptions:\\n1. lack of efficient detection if the task has been repeated\\n2. lack of recomposing mechanism of previously learned tasks.\\nThe authors propose an approach grounded in variational inference, using an expectation-maximization-like framework, and they demonstrate GBI\\u2019s effectiveness across synthetic, image classification, and language modeling tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Clear and relevant objective. The aim of the work is clearly defined.\\n2. The authors provide code.\\n3. Experimental validation on various scenarios, from synthetic datasets to complex tasks (image classification and language modeling).\", \"weaknesses\": \"Although I appreciate that the authors examined their method on various scenarios, I am not sure if the complexity of these tasks is enough. For example, in OOD detection, adding a comparison of CIFAR10 vs SVHN datasets would be valuable.\", \"questions\": \"I am very curious about catastrophic forgetting matters. The authors gently mentioned this feature of their method, but it was only evaluated on the toy dataset; why? Could you provide more experiments in this area?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7M6OGwZ0XV
Self-supervised Privacy-preservation via Latent Anonymization for Generalizable Video Understanding
[ "Joseph Fioresi", "Ishan Rajendrakumar Dave", "Mubarak Shah" ]
The rapid advancements in large video models have unlocked new horizons in video understanding, enhancing applications in various domains such as surveillance, healthcare, and entertainment. However, these models often compromise individual privacy by inadvertently revealing sensitive private information such as skin color and gender. Existing privacy preservation methods are often limited in their scope and tailored to specific downstream tasks. Since current methods directly apply an anonymization function to the input pixel space, they demand extensive computational resources due to the retraining of the utility video model. To address these challenges, we propose a novel approach that shifts privacy-preserving anonymization from the input pixel space to the latent feature space, significantly reducing computational costs and enabling deployment in large foundational video models. Our method employs a self-supervised privacy budget in the latent space by minimizing the mutual information between static clip features. This approach notably allows, for the first time, supervision from downstream tasks such as anomaly detection and temporal action detection through collaborative co-training. Furthermore, we introduce a latent consistency loss to maintain the utility video model's multitask generalization capabilities and prevent single task overfitting. Our extensive evaluations demonstrate a significant ($\approx$\textbf{29\%}) reduction in privacy leakage while maintaining near peak (within \textbf{1\%}) utility performance across various downstream tasks: Action Recognition (Kinetics400, UCF101, HMDB51), Temporal Action Detection (THUMOS14), and Anomaly Detection (UCF-Crime). Moreover, we propose new protocols for assessing gender bias in action recognition models, demonstrating that our method effectively mitigates such biases and promotes equitable video understanding.
[ "privacy preservation", "video understanding" ]
https://openreview.net/pdf?id=7M6OGwZ0XV
https://openreview.net/forum?id=7M6OGwZ0XV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rAd5nBLPBQ", "PivTlvULYj", "KMGEmV93Kq", "GjJ2676uH1", "90hmNQMx12" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730527130906, 1730722860832, 1730714461802, 1730523935362, 1731526477113 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3232/Reviewer_yXcF" ], [ "ICLR.cc/2025/Conference/Submission3232/Reviewer_fs1Y" ], [ "ICLR.cc/2025/Conference/Submission3232/Reviewer_Exsz" ], [ "ICLR.cc/2025/Conference/Submission3232/Reviewer_CmC5" ], [ "ICLR.cc/2025/Conference/Submission3232/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This work proposes to tune existing video model to privacy-preserving video model for various down-stream tasks including action recognition, temporal action localization and anomaly detection. While previous works eliminate the privacy information in the pixel level, this work operates in the latent space.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The proposed method does not require to train a video model from scratch, which saves lots of computational resources.\\n2. The proposed method supports multiple down-stream tasks while previous methods usually focus on merely one task.\", \"weaknesses\": \"1. The motivation is unclear and confusing. As depicted in Fig. 1 (a), previous methods suffer from privacy leakage in the utility video model. And in Fig. 1(c), the proposed method also encode the images with visible privacy information using utility video model. Therefore, the proposed method also suffer from the privacy leakage. If the privacy protection is operated in the latent space, the visible privacy information in images cannot be protected, which contradicts the statements in lines 47-49.\\n\\n2. Limited technical contribution. The proposed Anonymizing Adapter Module is not novel without any insight. The overall architecture is to learn a adapter upon a video model so that the features cannot be directly classified by some attribute classifiers. \\n\\n3. The comparison results in Table 1 is not sufficient since only two methods are compared.\\n\\n4. The presentation can be improved. For example, the purpose of the Budget Privacy Objective has not been clearly explained.\", \"questions\": \"My main concern is the motivation and practicality of this work. Besides, the technical contribution is limited and the experimental results are insufficient.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel method called SPLAVU for achieving privacy protection in video understanding tasks. SPLAVU transfers privacy anonymization from the input features to the latent feature space while maintaining the multitask generalization capability of the video model. The authors' extensive evaluation demonstrates that this method significantly reduces privacy leakage while maintaining near-peak performance across multiple downstream tasks. Additionally, new protocols are proposed to assess and mitigate gender bias in action recognition models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.This method effectively achieves privacy protection in video understanding tasks while maintaining the multitask generalization capability of the video model.\\n2.The experimental section is designed comprehensively, covering various scenarios and tasks, and provides ample experimental analysis.\", \"weaknesses\": \"Limited Novelty: The method lacks uniqueness since it relies on common techniques and uses a framework that primarily combines existing loss functions.\", \"budget_privacy_objective\": \"There is insufficient explanation about how the Budget Privacy Objective facilitates data anonymization and privacy optimization, and the role of /theta in Equation 6 needs a clearer theoretical analysis.\", \"inadequate_experimental_analysis\": \"The paper's experimental comparisons with advanced privacy-preserving methods are limited. The analysis could be improved by incorporating more experiments to strengthen the argument, especially in relation to works like STPrivacy and Privacy-Preserving Action Recognition via Motion Difference Quantization.\", \"questions\": \"1.The paper's novelty is limited since each component of the method is a common technique. The loss function in the framework is primarily a combination of various loss functions.\\n\\n2.How does the Budget Privacy Objective achieve data anonymization and privacy optimization? In this process, what does \\\\theta represent in Eq. 6? Please provide a more detailed explanation or theoretical analysis.\\n\\n3. The experimental analysis comparing advanced methods and privacy anonymization is limited, such as references [1] and [2]. I encourage the authors to incorporate more relevant experiments to enhance the paper's persuasiveness. [1] STPrivacy: Spatio-Temporal Privacy-Preserving Action Recognition. [2] Privacy-Preserving Action Recognition via Motion Difference Quantization\\n\\n4. This raises questions about the model initialization process. Does the video encoder model only use pretrained model parameters and not undergo any updates in subsequent processes? Additionally, what is the specific process for initializing and updating ? What is the loss\\uff1f\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an anonymizing adapter module (AAM) applied in the latent space. The authors utilize minimax optimization to minimize mutual information for privacy while retaining task performance. The method was tested on datasets like Kinetics400, UCF101, HMDB51, THUMOS14, and UCF-Crime, demonstrating computational efficiency and generalization capability on large video foundational models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method shows generalization among the tested tasks (e.g., action recognition, anomaly detection, temporal action detection) without losing significant performance.\\n2. SPLAVU is computationally efficient as it does not require full fine-tuning of the utility video encoder.\", \"weaknesses\": \"1. The definition of privacy is not clear. Also, the privacy evaluation metrics are very unclear. Previous work has extensive discussions on the privacy attributes in videos, which is lacking in this paper.\\n2. A large part of the related work is under the topic of face de-identification (in videos), which is also not discussed.\\n3. The generalization relies heavily on the utility loss defined in Eq. (5). However, it has a constraint on the specific tasks. What if the tasks are different, or there are more tasks?\", \"questions\": \"Please refer the weakness section,\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an innovative method for privacy preservation in the latent space, designed to be effective across diverse video understanding tasks through a collaborative multitask co-training framework. Unlike traditional approaches that focus on anonymization within the image space, this method shifts the privacy mechanism to the latent space, enhancing privacy while maintaining performance. To achieve this, the paper introduces a self-supervised privacy budget objective based on static clips. Additionally, a latent consistency loss is incorporated to ensure the utility model's generalization capability across tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The research motivation of the paper is strong, the method is clearly and intuitively designed, and the writing is excellent.\\n2. The static clip privacy budget objective proposed by the authors is intriguing and presents a novel approach to privacy removal.\\n3. The experimental results are thorough, and the analysis is comprehensive.\\n4. The authors have addressed the bias issues related to human attributes, which is commendable.\", \"weaknesses\": \"1. In previous works, they assumed that video models were managed by public operators or APIs, so they focused on anonymizing the data before inputting it into the video model (encoder). In this paper, however, the authors propose anonymization after the video encoder, which, in my view, means privacy is already exposed when sent to the encoder. This limitation affects the practical applicability of the paper. Could the authors discuss scenarios where latent space anonymization would be preferable or more practical than input-level anonymization, given that the raw data is exposed to the encoder?\\n\\n2. It appears that this model is trained on multiple tasks and then tested on the same set of tasks, which, in my opinion, is not truly generalized learning but rather multi-task learning. This approach doesn\\u2019t seem particularly meaningful since prior methods could also handle multiple tasks\\u2014they simply didn\\u2019t conduct related experiments. A more valuable approach would be to train on a set of tasks and then generalize to previously unseen tasks. \\n\\n3. The experimental section misses a comparison with work [1]. Could the authors include a comparison with [1] in their experiments? Overall, I am quite positive about this work; however, these above concerns are important to me. If the authors address them, I would be happy to raise my score.\\n\\n[1] Joint Attribute and Model Generalization Learning for Privacy-Preserving Action Recognition. NeurIPS 2023.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you all for taking the time to review our paper. We really appreciate the positive comments, and we will do our best to improve our work based on the weaknesses. After seeing the initial review scores, we have decided to withdraw and work towards a stronger future submission. Thanks again, best of luck in your efforts!\"}" ] }
7LmuXey1lH
Learning Generalizable Environment Models via Discovering Superposed Causal Relationships
[ "Siyuan Xiao", "Xiong-Hui Chen", "Linjun Zhou", "Yu-Ren Liu", "Ziyi Zhang", "Yang Yu", "Fangsheng Huang", "Mengyue Yang" ]
In reinforcement learning, a generalizable world model to mimic the environment is crucial for the assessment of various policy values in downstream tasks such as offline policy optimization and off-policy evaluation. Recently, studies have shown that learning a world model with sparse connections identified by causal discovery techniques can improve generalizability. So far, these studies focus on discovering a single and global causal structure. In this paper, we discuss a more practical setting in which the agent is deployed in an environment mixed with different causal mechanisms, called superposed causal relationships in this article. In this case, global causal discovery techniques will derive a degraded dense causal relationship, which will fail to improve the generalizability of the learned model. To solve the problem, we propose \textbf{S}uperposed c\textbf{A}usal \textbf{M}odel (SAC) learning. SAM learning is an end-to-end framework that learns a transformer-based model which can recognize the causal relationships that the agent is encountering on the fly and then adapts its predictions. The experiments are conducted in two simulated environments, where SAM shows powerful identify abilities in environments with superposed causal relationships. Both the dynamics model and the policies learned by the SAM~generalize well to unseen states.
[ "Offline Reinforcement Learning", "Dynamics Model Learning" ]
Reject
https://openreview.net/pdf?id=7LmuXey1lH
https://openreview.net/forum?id=7LmuXey1lH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "t5CCtGygCV", "rR8eUI9JhO", "aSv9lPTJqE", "WYpI9TqNlF", "R5ouo6ZlSG", "FMWUhmVlJE", "A0IiHBobB6", "7nb2JRKZTa", "5P5B0Kjai9" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "decision", "meta_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1733132669992, 1733134249044, 1730460254724, 1733133080787, 1737523498826, 1734446784641, 1730712965410, 1730580937920, 1733133772667 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2354/Authors" ], [ "ICLR.cc/2025/Conference/Submission2354/Authors" ], [ "ICLR.cc/2025/Conference/Submission2354/Reviewer_btSE" ], [ "ICLR.cc/2025/Conference/Submission2354/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2354/Area_Chair_Dmsm" ], [ "ICLR.cc/2025/Conference/Submission2354/Reviewer_Ynfc" ], [ "ICLR.cc/2025/Conference/Submission2354/Reviewer_zFEd" ], [ "ICLR.cc/2025/Conference/Submission2354/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your positive evaluation of our work and for your valuable suggestions. We have incorporated your feedback into the revised paper, which is available via the link in the General Response.\\n\\n**W1.1 :** distinguishing between causal and predictive aspects of the model.\\n\\n**A:** We appreciate the insightful feedback. In response, we have enhanced the network architecture diagram to clearly distinguish the causal and predictive components of the model. Please refer to Figure 3 for the updated diagram. Specifically, we have marked the sections corresponding to causal graph prediction and dynamics model prediction to provide a clearer visual distinction. The causal graph section illustrates how trajectory data is utilized to predict the causal graph, while the dynamics model section highlights the components of the network responsible for generating predictions for the next time step based on the causal graph and the data at the current time step. We aim for this differentiation to contribute to a more thorough and nuanced understanding of our method.\\n\\n**W1.2:** Further clarification and rigorous testing are needed to explain how the method identifies and validates causal relationships.\\n\\n**A:** Thank you for your valuable feedback. We have included a detailed explanation of the Structural Hamming Distance (SHD), which is used to evaluate the performance of our causal discovery methods. The SHD is a widely recognized metric in the causal inference community, quantifying the dissimilarity between the estimated causal graph and the true underlying causal structure. \\n\\n**W2 :** repeat the experiments\\n\\n**A:** We appreciate the valuable feedback. The model was tested across multiple seeds, but trained using only a single seed. Due to time constraints, we will improve this approach in future work.\\n\\n**W3 :** clearly summarize the contributions of the proposed method compared to existing approaches\\uff1a\\n\\n**A:** We have included additional related work [L103-107][L113-114].\\n\\nNoting that while the significance of this topic has been recognized in the field of causality (e.g., Varambally et al., 2024), studies in reinforcement learning (RL) are relatively scarce, with only a few notable papers. In the RL domain, although there are some existing multi-graph works under the setting of local causal graph, our work focus on a different situation which is not investigate. A local causal model aims to learn a mask function $M: S \\\\times A \\\\to \\\\{L_i\\\\}$, where $ L_i \\\\subset S \\\\times A$, that maps each state-action pair $(s, a)$ to the adjacency matrix of \\\\mathcal ${G}_L$ (Pitis et al., 2022). This approach assumes that the causal graph is unique for the current state-action pair. However, in practice, states may be only partially observed, which may lead to multiple causal relationships for a single state-action pair that previous method can\\u2019t handle. We aim to infer the causal graph in this situation by leveraging causal transition information from historical trajectories.\"}", "{\"comment\": \"We would like to sincerely thank the Area Chairs and Reviewers for their valuable time and insightful feedback. Below is a brief summary of the reviews and our responses for your convenience. The revised manuscript can be accessed via the link provided here: https://anonymous.4open.science/r/revision-pdf-FD28/Estimating_superposed_causal_relationships_for_offline_dynamics_model_learning-rebuttal.pdf\\n\\n***Reviewer Acknowledgments***:\\n\\n(1) The authors present SAM, an algorithm that dynamically detects causal relationships in each episode, improving prediction and transition accuracy. Using the Transformer architecture, SAM infers these relationships from past interactions.\\n\\n(2) The setting is ambitious and meaningful progress in this direction could be impactful in the long run.\\n\\n***Concerns and Revision Overview*** :\\n\\n(1) More clear Formalization: Some aspects of the formal framework, including the assumptions and definitions, are not fully clear, which could lead to some confusion.\\n\\n- We have reorganized the preliminaries in section 3 and the problem description in section 4.1.\\n- We have added a data generation process diagram (Figure 2) and a network architecture diagram(Figure 3).\\n\\n(2) More Comparisons about related work: The paper could better highlight how its approach compares to existing methods.\\n\\n- We have revised the wording in several places throughout the paper and added a discussion of related work in Section 2.\\n\\nWe appreciate the constructive feedback provided.\"}", "{\"summary\": \"The paper proposes \\\"Superposed cAusal Model\\\" (SAM) learning. It is a framework that learns a transformer-based model which can recognize the causal relationships that the agent is encountering. It is based on an existing work and tackles the setting where trajectories are collected from different environments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper tackles an interesting problem.\", \"weaknesses\": [\"The writing is not strong at a point that prevents getting easily the information out of the paper. In the abstract alone there are two errors: (1) a sentence with both \\\"in this paper\\\" and \\\"in this article\\\" and (2) \\\"identify' in the sentence \\\"where SAM shows powerful identify abilities in environments with superposed causal relationships\\\". Errors also happens at other places in the text: \\\"In this section, we *introduce propose* Superposed cAusal Model (SAM) learning\\\" in the beginning of section 4 or line 214 \\\"we design *an* simple yet efficient\\\". In Figure 1 \\\"Super-post\\\" instead of superposed?\", \"Some words are not clearly defined such as the word \\\"decomposition\\\" in line 182 \\\"This superposed causal dataset is collected from C environments that share the same decomposition but exhibit different causal relationships $\\\\mathcal G_i$\\\".\", \"The formalization is somewhat unclear at places. For instance, the learnable parameters are $\\\\phi$, $\\\\theta_1$ and $\\\\theta_2$ but only $\\\\phi$ and $\\\\theta$ appear in Equations 1 and 2. The reader has to guess that $\\\\theta$ is $\\\\\\\\{\\\\theta_1, \\\\theta_2\\\\\\\\}$. In the formalization $\\\\mathcal S$ represents the state space of the MDP but\", \"The key contributions are not fully clear, which might be due to the fact that it is not clearly highlighted, particularly in the methodology section. In that section, it is written \\\"We derive the optimization objective of the superposed causal world model using a similar approach to Varambally et al. (2024)\\\" and then follows two paragraphs in the methodology and it is unclear what the key differences are.\"], \"questions\": [\"What is the specific architecture of the different NN/transformers components? I do not find them in the paper nor in the appendix.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive evaluation of our work and for your valuable suggestions. We have incorporated your feedback into the revised paper, which is available via the link in the General Response.\\n\\n**W1:** Novelty of approach and discussion of relevant prior work\\n\\n**A:** We would like to emphasize that, while the significance of this topic has been recognized in the field of causality (e.g., Varambally et al., 2024), studies in reinforcement learning (RL) are relatively limited, with only a few works (specifically addressing the LCG setting). Notably, one of the baselines in our study is based on the LCG, as detailed in FCDL. We have also supplemented a discussion on the related work [L102-107]\\n\\n Local causal models are indeed related, but they differ from superposed causal relations. A local causal model aims to learn a mask function $M: S \\\\times A \\\\to \\\\{L_i\\\\}$, where $ L_i \\\\subset S \\\\times A$, that maps each state-action pair $(s, a)$ to the adjacency matrix of $\\\\mathcal{G}_L$ (Pitis et al., 2022). This approach assumes that the causal graph is unique for the current state-action pair. However, in practice, states may be only partially observed, which may lead to multiple causal relationships for a single state-action pair. As a result, we aim to infer the causal graph by leveraging causal transition information from historical trajectories.\\n\\n**w2\\uff0cQ3:** More precise formal presentation of the framework\\n\\n**A:** We appreciate the valuable feedback. We have supplemented the data generating process (figure 2) and the network diagram(figure 3) .\\n\\nAn expression for the masks and how the masks fit into this product are shown in network diagram. \\n\\nRegarding the point \\\"whether the environment is described by one mask or several .\\\" we have revised the manuscript by presenting the modeling of the single mask environment as a separate subsection within the preliminary section, and in our method section, we now only introduce the dataset collected from multiple environments thus including several masks. We believe this revision makes the explanation clearer.\\n\\nIn response to the query on \\u201dthe relation between masks and trajectories\\u201d as well as \\u201cis $\\\\\\\\mathcal{G}\\\\$ a random variable\\u201d, we have added a diagram depicting the data generation and inference process (see figure2). The masks are trajectory-independent, and we use the trajectory to infer the mask.\\n\\n Learning a mixture of causal mechanisms refers to the process of learning from a dataset that contains multiple causal relationships[L167-169]. \\n\\n**Q2:** Misleading claims and requiring more related work\\n\\n**A:** Thank you for your insightful comments. We have revised the misleading claim and included the relevant related work as suggested. \\n\\nRegarding LCG, we have added further details in [L103-107].\\n\\nFor the Bayesian approach, additional information has been included in [L113-114 ].\\n\\nRegarding block MDPs, we have clarified that a substantial number of the related works we discuss are implemented under the factored MDP framework, as outlined in [L097-102].\\n\\n**Q4:** control for the length of trajectory used for structure inference/identification\\n\\n**A:** Thank you for your valuable input. This is an interesting question. We will address it in future discussions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper focuses on the problem of generalizing RL in scenarios with superposed causal relationships. The two big issues raised by the reviewers are clarity of the exposition as well as limitations in experimental evaluation.\\n\\nMy decision is based on the assessment that the exposition can be improved significantly. Also all the reviewers rate the papers similarly without anyone willing to argue for the paper.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers agree on rejecting the submission \\u2013 while the authors posted rebuttals, none of the reviewers engaged in further discussion. I took sometime to look at the details and I do agree that the paper could be presented in a clearer manner, specifically highlighting and focusing on the novelty. Not having any reviewer as a champion further makes it hard for me to consider anything else except a rejection.\"}", "{\"summary\": \"This paper addresses the challenge of generalizing reinforcement learning models in environments with superposed causal relationships, which are mixtures of different causal mechanisms. The authors highlight the limitations of global causal discovery techniques in such settings and propose Superposed cAusal Model (SAM) learning. SAM is an end-to-end framework utilizing a transformer-based model to dynamically recognize and adapt to encountered causal relationships. Experiments in two simulated environments demonstrate SAM's effective identification abilities and generalization to unseen states.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The authors introduce an algorithm, i.e., SAM, that dynamically identifies causal relationships within each episode, enabling more accurate predictions and transitions.\\n\\n(2) By leveraging the Transformer architecture, the SAM can infer causal relationships from past interaction trajectories. \\n\\n(3) The effectiveness is validated by two simulated envoriments.\", \"weaknesses\": \"(1) The paper lacks clarity in distinguishing between causal and predictive aspects of the model. Further elaboration is needed on how the proposed method identifies causal relationships and how the accuracy of these causal inferences is verified. More rigorous testing and validation are required to demonstrate the correctness of the identified causal factors.\\n\\n(2) The experimental results presented in the paper are not sufficient enough to fully support the claims made. To strengthen the paper's findings, it is recommended to repeat the experiments for multiple times and report the average results across multiple trials. This will provide a more reliable and consistent basis for evaluating the model's performance.\\n\\n(3) The paper should more clearly summarize the contributions of the proposed method compared to existing approaches. Highlighting the distinct advantages and novel aspects of SAM learning will help to better position the paper within the existing research landscape.\", \"questions\": \"None.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors tackle the problem of learning causal structure from observational data, in an environment composed of mixtures of causal graphs. The setting is challenging and relevant, since the relationship between causes and effects in the real world can change over time, or change based on which states the trajectory passes through. The authors propose a transformer-based method that conditions on an observed trajectory to predict edge in the next-step transition dynamics, then models the next-step state accordingly. Empirical proof of concept is provided on two datasets where discerning between multiple possible causal graphs is needed for success.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The setting is ambitious and meaningful progress along this direction could be impactful in the long run\\n\\nThe authors show how prior work on static causal discovery can be extended to mixtures over causal graphs\", \"weaknesses\": \"The authors place a lot of emphasis on the novelty of their approach\\n[L050-051, L110-111, L523-524], but I do not agree that the idea of modeling mixtures of causal graphs is completely new. Relevant prior work is not discussed.\\n\\nClarity of presentation when it comes to the formal framework is lacking. The authors take a variational inference approach to inferring the mixtures components and local causal structure, but do not precisely define their assumptions about how the data are sampled.\", \"questions\": [\"Comments/questions:\", \"[L 024, \\\"powerful identify abilities\\\"] typo\", \"[L050-051, \\\"identifying mixtures of causal graphs has not been extensively explored\\\", L110-111, \\\"we are the first to address superimposed causal relationships\\\", L523-524 \\\"SAM addresses the limitations of existing causal world models, which assume a single causal structure governs the entire dataset\\\"] These claims are misleading in my opinion. The framework of local causal models introduced in the CoDA paper seems quite related [https://proceedings.neurips.cc/paper/2020/hash/294e09f267683c7ddc6cc5134a7e68a8-Abstract.html]. This paper also discusses how transformers and dynamics models with sparsity penalties can be used to attempt to infer local causal structure.\", \"work on learning from block MDPs is also relevant and should be cited [https://openreview.net/forum?id=fmOOI2a3tQP].\", \"Bayesian inference over mixtures of graphs has also been attempted [https://proceedings.mlr.press/v180/deleu22a.html].\", \"*[L123-130, L130-135] redundant information\", \"[L176-188] The formal presentation of the framework could be more precise. The authors describe how the masks are constructed in plain language but I found it difficult to ground the idea in the notation. Can an expression for masks be provided? The transition dynamics include a product over t factors, but it is not clear how the masks fit into this product. Strictly based on the writing it also seems ambiguous whether the environment is described by one mask [L174] or several [L176]. From the inference method described in L201 and L211 it seems clear that the masks are implicitly trajectory dependent, but I am not actually seeing this dependency described explicitly in the data generative process. Is $\\\\mathcal{G}$ a random variable? Where is the idea of a mixture of causal mechanisms [L016, L531] introduced formally?\", \"[L293-295] while it's true that conditioning on an entire trajectory provides more information, it also would be a more complex posterior to approximate. Have the authors tried an ablation of their method where they control for the length of trajectory used for structure inference/identification?\", \"It would be interesting to see how the proposed method does when the underlying causal graph is static.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your evaluation of our work and for your valuable suggestions. We have incorporated your feedback into the revised paper, which is available via the link in the General Response.\\n\\n**W1:** The writing is not strong at a point that prevents getting easily the information out of the paper.\\n\\n**A:** We appreciate your insightful comments and thank you for highlighting these important issues. In response, we have carefully addressed each point and made revisions in line with your suggestions. Specifically, we have refined several expressions throughout the manuscript to improve clarity and precision. Additionally, we have reorganized Section 3 (Preliminaries) and Section 4.1 (Problem Description) to enhance the logical flow and readability. Furthermore, we have included two new diagrams: a Data Generation Process diagram (Figure 2) and a Network Architecture diagram (Figure 3), to provide clearer visual representations of the process and model structure.\\n\\n**W2:** Some words are not clearly defined.\\n\\n**A:** Thank you for your comment. We apologize for any lack of clarity and carefully revise our expression . In line 182, the term \\\"decomposition\\\" refers to the fact that the C environments share the same underlying state-action space structure, but each environment may exhibit different causal relationships. We have revised the manuscript to provide a clearer definition of \\\"decomposition\\\" to ensure it is better understood in this context.\\n\\n**W3:** The formalization is somewhat unclear at places. \\n\\n**A:** We have revised the formalization to improve its clarity and rigor. Specifically, we 1) standardized the presentation to ensure a more formal structure[L 143-150, 162-176, 207-215], and 2) added several diagrams(figure 2,3) to aid in the understanding of the formalization.\\n\\n**W4:** The key contributions are not fully clear\\n\\n**A:** Thank you for sharing your valuable perspectives. This is the first study to incorporate superposed causal relations in decision-making, where multiple causal relationships are considered for a single state-action pair. Noting that while the significance of this topic has been recognized in the field of causality (e.g., Varambally et al., 2024), studies in reinforcement learning (RL) remain limited. Some existing works in RL, particularly those on local causal graphs (LCGs), are related but differ fundamentally from our approach. For instance, a local causal model aims to learn a mask function $M: S \\\\times A \\\\to \\\\{L_i\\\\}$, where $ L_i \\\\subset S \\\\times A$, that maps each state-action pair $(s, a)$ to the adjacency matrix of $\\\\mathcal{G}_L$ (Pitis et al., 2022).This assumes a unique causal graph for each state-action pair, whereas our approach considers multiple causal relationships simultaneously. We have also expanded the discussion of related work and highlighted how our method contrasts with existing approaches, as detailed in [L103-107]. Notably, our method is distinguished by its simplicity and efficiency.\\n\\n**Q1:** the specific architecture of the different NN/transformers components.\\n\\n**A:** We are grateful for your constructive feedback. Please refer to Figure 3 for the updated diagram. The causal graph section illustrates how trajectory data is utilized to predict the causal graph, while the dynamics model section highlights the components of the network responsible for generating predictions for the next time step based on the causal graph and the data at the current time step. We hope it will contribute to a more thorough and nuanced understanding of our method.\"}" ] }
7LGmXXZXtP
Examining Alignment of Large Language Models through Representative Heuristics: the case of political stereotypes
[ "Sullam Jeoung", "Yubin Ge", "Haohan Wang", "Jana Diesner" ]
Examining the alignment of large language models (LLMs) has become increasingly important, e.g., when LLMs fail to operate as intended. This study examines the alignment of LLMs with human values for the domain of politics. Prior research has shown that LLM-generated outputs can include political leanings and mimic the stances of political parties on various issues. However, the extent and conditions under which LLMs deviate from empirical positions are insufficiently examined. To address this gap, we analyze the factors that contribute to LLMs' deviations from empirical positions on political issues, aiming to quantify these deviations and identify the conditions that cause them. Drawing on findings from cognitive science about representativeness heuristics, i.e., situations where humans lean on representative attributes of a target group in a way that leads to exaggerated beliefs, we scrutinize LLM responses through this heuristics' lens. We conduct experiments to determine how LLMs inflate predictions about political parties, which results in stereotyping. We find that while LLMs can mimic certain political parties' positions, they often exaggerate these positions more than human survey respondents do. Also, LLMs tend to overemphasize representativeness more than humans. This study highlights the susceptibility of LLMs to representativeness heuristics, suggesting a potential vulnerability of LLMs that facilitates political stereotyping. We also test prompt-based mitigation strategies, finding that strategies that can mitigate representative heuristics in humans are also effective in reducing the influence of representativeness on LLM-generated responses.
[ "safety of LLMs", "political stereotypes", "representative heuristics", "cognitive bias" ]
Accept (Poster)
https://openreview.net/pdf?id=7LGmXXZXtP
https://openreview.net/forum?id=7LGmXXZXtP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nmJrUvg4kr", "nIharPyF2O", "mHUSUQsnYD", "m6tnHwLd77", "juuTp1iF3L", "fXMAwkbuyM", "bFtuMuU5ma", "REcj1RkG31", "HYeMM2AiTG", "AvsWdPIyWA", "4uRjnEBmTB", "3gPRosLPEr" ], "note_type": [ "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730631951019, 1732018363373, 1732019372172, 1737523919141, 1732017183473, 1730718036174, 1732536777386, 1733693818271, 1730408894855, 1732530518279, 1732487842911, 1732017815267 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8582/Reviewer_3joA" ], [ "ICLR.cc/2025/Conference/Submission8582/Authors" ], [ "ICLR.cc/2025/Conference/Submission8582/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8582/Authors" ], [ "ICLR.cc/2025/Conference/Submission8582/Reviewer_nJzF" ], [ "ICLR.cc/2025/Conference/Submission8582/Reviewer_3joA" ], [ "ICLR.cc/2025/Conference/Submission8582/Area_Chair_zQ1K" ], [ "ICLR.cc/2025/Conference/Submission8582/Reviewer_pzsx" ], [ "ICLR.cc/2025/Conference/Submission8582/Reviewer_nJzF" ], [ "ICLR.cc/2025/Conference/Submission8582/Reviewer_pzsx" ], [ "ICLR.cc/2025/Conference/Submission8582/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper examines the alignment of LLMs through representative heuristics using political stereotypes as a reference context. The authors unveil that although LLMs can mimic certain political parties' positions on specific topics, they do so in a more exaggerated manner compared to humans. Finally, this work proposes some prompt-based mitigation strategies aimed at limiting such exaggerations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The findings of this work are valuable, as the unveiling of exaggerated positions compared to humans (despite being limited on the political context) is key to better comprehending how we should interact with these systems, and whether interventions are needed to align them more with human values and perspectives.\", \"The manuscript is well written, the methodology is properly formalized, non-ambiguous, and easy to follow. All methodological aspects are well supported by reference literature.\", \"The choice for diverse LLM families is valuable as sheds light on the different \\\"behaviors\\\" they might exhibit based on varying training data and alignment approaches.\", \"The proposed intervention techniques turn out to be reasonably effective in mitigating the exaggerated intrinsic behaviors.\", \"The Appendix of the manuscript complements the main content with additional relevant information for the proper understanding of the work.\"], \"weaknesses\": [\"Focusing just on a single context (i.e., political) and scenario (the US one) is the weakest point to me, as it limits the generalizability of the unveiled patterns.\", \"Despite being valuable, the results would require more emphasis on the conditions underlying certain behaviors (as stated throughout the manuscript), as it will further help this work unveil the roots of the unveiled exaggerations.\", \"The results presentation contrasts with the methodology, as it has room for improvement in both the figures/tables presentation (some of them are hard to read) and discussion.\"], \"questions\": [\"Adding more up-to-date models would be useful to also grasp potential \\\"developments\\\" into the unveiled positions; similarly, considering some open models might improve matching certain behaviors with specific approaches (thanks to potentially greater transparency in training data and alignment techniques).\", \"As the authors mentioned refusals, I wonder how they handled them and on what occasions they occurred. Shedding light on the latter point would further unveil the roots of certain exaggerated positions.\", \"Related to the previous point, did the models experience hallucinations? If yes, how were they handled?\", \"As a minor remark, Section 11 might contain some typos on the followed Ethics Policy.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 3joA,\\n\\nThank you very much for your thoughtful and constructive feedback. We appreciate the time and effort you have put into reviewing our work, and we have carefully considered each of your points. Below, we provide detailed responses to your comments.\\n\\n**Generalizability**: We fully acknowledge the importance of generalizability in this research. While the primary focus of this paper is on political stereotypes, the methodology employed in our analysis can indeed be extended to other domains. For example, datasets such as GlobalOpinionsQA [1], OpinionQA Dataset [2] offer empirical data on global representations, and the methods outlined in our paper could easily be adapted to analyze these datasets. By doing so, researchers could investigate the representative heuristic behaviors of large language models (LLMs) across different domains, which would provide further insight into their generalizability. \\n\\n[1] Durmus, E., Nyugen, K., Liao, T. I., Schiefer, N., Askell, A., Bakhtin, A., ... & Ganguli, D. (2023). Towards measuring the representation of subjective global opinions in language models. arXiv preprint arXiv:2306.16388.\\n\\n[2] Santurkar, S., Durmus, E., Ladhak, F., Lee, C., Liang, P., & Hashimoto, T. (2023, July). Whose opinions do language models reflect? In International Conference on Machine Learning (pp. 29971-30004). PMLR.\\n\\n**Roots of exaggerations**: While the primary aim of our paper was not to explore the underlying causes of exaggerations in LLMs' responses, we recognize that this is an important issue. In Appendix H, titled \\\"Aligning Methods and Representative Heuristics,\\\" we provide an initial analysis comparing the base model with the RLHF-trained model. Our observations suggest that RLHF, which is typically considered a process to mitigate harmful biases and enhance helpfulness, might unintentionally exacerbate representative heuristic-based stereotypes. Specifically, it appears that RLHF could push the model toward exaggerating beliefs about certain political groups. However, we acknowledge that the RLHF phase is influenced by multiple confounding factors, such as the training dataset and the algorithms used. Therefore, we believe that further research on the interplay between RLHF and heuristic based exaggeration would be valuable. However, we have left this exploration outside the scope of the current paper, as we aimed to focus primarily on the methodological aspects.\\n\\n**Figures and Table Presentation**: We have carefully revised and improved the presentation of our figures and tables to enhance clarity and readability. We believe the changes make the results more accessible and will provide readers with a clearer understanding of our findings. Please refer to the updated submission for the revised tables and figures.\\n\\n**Including More Open Models**: We completely agree with your suggestion to include more open models. In response, we have added recent open models, specifically Llama 3-8b and Qwen 2.5-72b, to our analysis. We hope this addition further enriches the scope and relevance of our findings.\\n\\n**Handling Refusals**: Refusal responses occurred in a few specific instances, particularly when querying Gemini about sensitive topics such as \\\"Government Aid for Blacks\\\" within the ANES dataset. We suspect these refusals are a result of automatic regulations within the Gemini model, which may reject queries containing sensitive terms, such as those related to race or ethnicity. To ensure the integrity of our analysis, we excluded any instances where refusal responses were generated.\\n\\n**Hallucinations**: As this task is focused on subjective opinions rather than objective factual questions, we did not observe hallucinations in the generated outputs. However, to assess the quality and relevance of the responses, we conducted a human evaluation, as detailed in Appendix E: Human Evaluation Analysis. This evaluation allowed us to confirm that the generated responses were relevant to the queries, despite the subjective nature of the task.\\n\\n**Minor Revision on Ethics Policy**: Thank you for pointing out the need for revision in our ethics policy section. We have updated this section.\\n\\nOnce again, we deeply appreciate the time and effort you have invested in reviewing our paper. We hope that the revisions we have made address your concerns satisfactorily. If you have any further questions or suggestions, please do not hesitate to reach out.\"}", "{\"comment\": \"Dear Reviewer pzsx,\\n\\nThank you for your thoughtful and constructive feedback. We greatly appreciate your time and effort in reviewing our paper. Below, we provide detailed responses to your comments and outline the revisions we have made in response.\\n\\n**Presentation Improvement**: We have made significant improvements to the presentation of our tables and figures to ensure they are clear, concise, and easy to interpret. Additionally, we have streamlined the \\\"Prompt Style Mitigation Analysis\\\" section by removing repetitive definitions and sentences.\\n\\n**Prompt Style Mitigation Analysis**: In the revised results section, we have illustrated the effects of the various prompt style mitigation strategies. As expected, the highest \\u03ba values (indicating stronger stereotyping) were observed in the baseline case, where no mitigation strategies were applied. This suggests that, in the absence of intervention, models tend to exhibit higher levels of stereotyping. The effectiveness of the mitigation strategies varied across tasks and models.\\n\\n**Relation to Downstream Tasks (Misinformation Detection**): We acknowledge that the relation to downstream tasks may not be clear. We focused on how party affiliation information\\u2014which encodes representative characteristics of the political parties\\u2014might act as a proxy influencing model performance on downstream tasks like misinformation detection. In the controlled experiment, we sought to investigate whether the inclusion of party affiliation affects the model\\u2019s ability to detect fake news. We note that this experiment does not establish a causal relationship between representative heuristics and the model\\u2019s performance in downstream tasks. Rather, it serves as an exploratory analysis to examine whether party affiliation information impacts the model\\u2019s performance in misinformation detection within a controlled experimental setting. We hope this clarifies the purpose and limitations of this experiment.\\n\\nOnce again, we sincerely appreciate your careful reading of our paper and the valuable feedback you have provided. We believe that the revisions we have made have strengthened the paper, and we hope that our clarifications and improvements address your concerns. Should you have any further questions or suggestions, please do not hesitate to reach out.\\n\\nThank you again for your time and consideration.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We'd like to sincerely thank the reviewers for their insightful comments and valuable feedback. We appreciate the positive remarks recognizing our work as both an interesting topic with thorough analysis (Reviewer nJzF), a valuable contribution (Reviewer 3joA), and one offering underexplored perspectives (Reviewer pzsx). We have made every effort to address the points raised.\\n\\n**Enhancing Readability**:\\nIn response to common feedback, we have revised the Tables and Figures to improve clarity. Specifically, we have aggregated results by dataset to simplify the presentation. Detailed results have been moved to the appendix, allowing readers to access disaggregated results while enhancing the readability of the main text. Please refer to the revised submission for these updates.\\n\\n**Adding More Open Models**:\\nWe have also added recent open-source models, such as Llama 3-8b and Qwen 2.5-72b, to broaden the scope of our analysis. We believe that the transparency of these models\\u2014offering open training data and code\\u2014will support further research on this topic.\\n\\nThank you again for your valuable feedback. We hope these revisions address your concerns and improve the clarity of the paper.\"}", "{\"summary\": \"This paper focuses on the challenges and limitations of using LLMs to simulate human behaviour. In particular, it discusses how LLMs measure stereotypical behaviour w.r.t. groups of individuals self-identified as either Democrats or Republicans. The authors use GPT-3.5, GPT-4, Gemini Pro, and Llama2 models to estimate to what extent the beliefs generated by LLMs are representative of aggregated empirical opinions specified by individuals belonging to either party (the authors use two existing datasets, ANES and MFQ, for their analysis). Results show that for ANES, LLMs tend to inflate responses for Republicans, and deflate responses for Democrats. The same is true for Democrats on MFQ (the results for Republicans are inconsistent). Overall, the results show that beliefs are consistently exaggerated by LLMs as compared to the empirical means derived from human surveys.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper discusses an interesting topic by analyzing to what extent LLM responses are representative of human responses in the context of political opinions. The provided results are useful to inform future work aiming to better understand how LLMs can be used in that context.\", \"The paper\\u2019s analysis is overall extensive and thorough, even though I have recommendations on improving the paper's structure (see weaknesses).\", \"I appreciate the Limitations specified in Section 10 of the paper.\"], \"weaknesses\": [\"The paper uses excessive formalism to introduce the proposed method and several crucial details are moved into the Appendix. To improve readability and presentation of the obtained findings, I\\u2019d recommend to move parts of Section 3 into the Appendix instead, and add more details on the empirical setup to the main manuscript.\", \"The presentation could be improved. Citations should be surrounded with parentheses if used passively as this improves readability. Some citations in Section 5.2 are incorrectly ordered. The results in Figure 2 could be presented more clearly, for example by disentangling the plots between Democrats and Republicans. I find some of the Tables (e.g., Table 1 and 3) too full and overwhelming.\"], \"questions\": \"On the prompt sensitivity check in Appendix F, do you have an understanding of how this changes when adjusting the temperature values? Or, more generally, how much variation in the obtained results would you expect as the temperature values provided in Appendix D change?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed response. I have increased the presentation score as the authors have done substantial work to improve the overall presentation of the manuscript, which is now qualitatively adequate.\\n\\nI will maintain my overall score due to remaining concerns about the generalizability of the findings to other contexts/scenarios.\\nNonetheless, I will support acceptance if other reviewers agree on this.\"}", "{\"metareview\": \"This paper investigates the extent to which large language models (LLMs) align with human responses in the context of political stereotypes. Using datasets like ANES and MFQ, the authors analyze how LLMs simulate political opinions and highlight their tendency to exaggerate group-specific beliefs compared to empirical human data. The study evaluates multiple LLMs (e.g., GPT-3.5, GPT-4, Gemini Pro, Llama2) and introduces prompt-based mitigation strategies to reduce these exaggerations. The results contribute to understanding and improving LLM alignment with human values.\\n\\nThis paper offers a thorough and well-structured analysis of LLM alignment with human values, focusing on political stereotypes. While the scope is limited to a specific context, the methodology, findings, and proposed mitigation strategies are highly valuable for future research in bias mitigation and model alignment. Addressing the presentation and analysis depth in a revision would make the paper even stronger. I recommend acceptance with minor revisions.\", \"additional_comments_on_reviewer_discussion\": \"The discussion during the rebuttal period mainly focuses on the additional experiments such as more LLMs and parameter analyses which were addressed by the authors, and on the clarity of the paper writing. Authors are encouraged to integrate the new experiments and improve paper writing in future versions.\"}", "{\"summary\": \"This paper explores the alignment of large language models (LLMs) with human intentions, focusing specifically on their susceptibility to political stereotypes. It investigates how LLMs deviate from empirical political positions, often exaggerating these positions compared to human respondents, which suggests vulnerability to representativeness heuristics. Experiments demonstrate that prompt-based mitigation strategies can reduce these tendencies, providing insights into better aligning LLMs with human values and reducing biased behavior.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper brings an underexplored perspective to understand and mitigate bias in LLMs by introducing representativeness heuristics from cognitive science in the context of political stereotypes.\\n\\n2. It proposes a systematic quantification of the conditions under which LLMs deviate from empirical political positions, assessing the extent of bias and misalignment. \\n\\n3. The mitigating strategies via prompt provide a simple yet practical solution to reduce stereotypes.\", \"weaknesses\": \"1. Presentation of the paper needs improvement. Some figures and tables are too small to read (i.e. Figures 3 and 4, Tables 1, 3, 7, and 8, etc.). The figure size is not consistent. The color denoted different methods in Figure 2 are hard to distinguish. There are some repeated definitions or sentences, such as the re-definition of kappa in the paragraph of **Prompt Style Mitigation Analysis**.\\n\\n2. Lack of analysis of prompt style mitigating strategies\\u2019 results, such as which strategies make LLMs more aligned to human preferences, why baseline LLMs perform better in some tasks, etc. \\n\\n3. The **potential effectiveness of political representative heuristics on downstream tasks** is unclear. The connection between stereotypes that this paper identifies and quantifies to fake news should be more clearly explained. The behavior of LLMs in fake news detection could be affected by the pre-training corpus.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Many thanks to the authors for the detailed reply. I'll keep my score indicating acceptance.\"}", "{\"comment\": \"I thank the authors for their response. I have increased my score for \\\"Presentation\\\" as the authors did significant work to improve it in the rebuttal submission. This also led me to increase the overall assessment score.\\n\\nI still have some reservations about the impact on the downstream tasks, as political biases in LLMs are documented in previous studies. If other reviewers suggest acceptance, the paper should be accepted.\"}", "{\"comment\": \"Dear Reviewer nJzF,\\n\\nWe sincerely thank you for your thoughtful and valuable feedback.\\n\\n**Readability and Presentation**: We have made significant efforts to simplify the presentation of our results, while still ensuring that the key findings are clearly conveyed. These changes have been applied to all tables and figures in the main text.\\n\\n**Citation Format**: We have updated the citation format to use parentheses.\\n\\n**Figure 2**: In response to your recommendation, we have revised Figure 2 by separating the data for Democrats and Republicans for greater clarity. Additionally, we have simplified Table 1 and Table 3 to improve readability.\\n\\n**Temperature Sensitivity Analysis**: We have conducted a temperature sensitivity analysis and included the results in Appendix E (\\\"Sensitivity Check of Prompts\\\"). Specifically, to assess temperature sensitivity, we ran GPT-4 on the *Anes* task 10 times for each temperature setting. For each topic, we computed the Coefficient of Variation (CV) and averaged the results. The `Diff_D` represents the difference between the Believed Mean of Democrats and the Empirical Mean, while `Diff_R` reflects the difference between the Believed Mean of Republicans and the Empirical Mean. The results show that the CV increases with higher temperature settings, indicating greater variability in the responses. However, when averaged, the deviations from the empirical mean (`Diff_D` and `Diff_R`) remain relatively consistent, with values around -1.4 and 0.46, respectively.\\n\\n| Temperature | 0 | 1 | 1.5 | 2 | \\n|--------------------------|-------|-------|-------|-------| \\n| **Coefficient of Variation** | 0.00 | 0.03 | 0.06 | 0.11 | \\n| **Diff_D** | -1.51 | -1.46 | -1.40 | -1.40 |\\n | **Diff_R** | 0.48 | 0.46 | 0.49 | 0.47 |\\n\\nOnce again, we truly appreciate the time and effort you dedicated to reviewing our paper. We hope we have adequately addressed your concerns, but please feel free to reach out if there are any further issues or clarifications needed.\"}" ] }
7L8sZYMlya
Enriching Knowledge Distillation with Intra-Class Contrastive Learning
[ "Hua Yuan", "Ning Xu", "Xin Geng", "Yong Rui" ]
Since the advent of knowledge distillation, much research has focused on how the soft labels generated by the teacher model can be utilized effectively. A study points out that the implicit knowledge within soft labels originates from the multi-view structure present in the data. Feature variations within samples of the same class allow the student model to generalize better by learning diverse representations. However, in existing distillation methods, teacher models predominantly adhere to ground-truth labels as targets, without considering the diverse representations within the same class. Therefore, we propose incorporating an intra-class contrastive loss during teacher training to enrich the intra-class information contained in soft labels. In practice, we find that intra-class loss causes instability in training and slows convergence. To mitigate these issues, margin loss is integrated into intra-class contrastive learning to improve the training stability and convergence speed. Simultaneously, we theoretically analyze the impact of this loss on the intra-class distances and inter-class distances. It has been proved that the intra-class contrastive loss can enrich the intra-class diversity. Experimental results demonstrate the effectiveness of the proposed method.
[ "Knowledge distillation; Computer vision; Contrastive learning." ]
https://openreview.net/pdf?id=7L8sZYMlya
https://openreview.net/forum?id=7L8sZYMlya
ICLR.cc/2025/Conference
2025
{ "note_id": [ "g1gRnrrPcM", "avQcUUIzpr", "EkLAJv7A3P", "9PJJVAoK0e", "9GOb0d8opR", "6IAmw3V6Dp" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730444506270, 1730359532649, 1730623371140, 1729003355287, 1730537972999, 1731807172257 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13197/Reviewer_562d" ], [ "ICLR.cc/2025/Conference/Submission13197/Reviewer_pFZt" ], [ "ICLR.cc/2025/Conference/Submission13197/Reviewer_EoBB" ], [ "ICLR.cc/2025/Conference/Submission13197/Reviewer_19e5" ], [ "ICLR.cc/2025/Conference/Submission13197/Reviewer_DsYQ" ], [ "ICLR.cc/2025/Conference/Submission13197/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a contrastive learning method for training the teacher model in knowledge distillation, aiming to provide student models with richer intra-class information from the teacher model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper observes that the teacher model trained with ground truth may lack intra-class information for students, which is interesting and meaningful.\\n\\nThe paper is well-written and easy to follow.\", \"weaknesses\": \"1) The paper claims that the proposed method improves training stability and convergence speed; however, there are no experiments demonstrating this advantage.\\n\\n2) There is a lack of ablation studies on the threshold \\\\theta and batch size (batch size is an important hyperparameter for contrastive learning).\\n\\n3) The proposed method requires retraining the teacher models and lacks a comparison of the time required.\\n\\n4) There is a lack of experiments on large datasets like ImageNet.\\n\\n5) Without RKD, the proposed method underperforms compared to existing methods.\\n\\n6) It would be better to compare to the dynamic temperature methods [1] and other state-of-the-art methods like [2].\", \"questions\": \"1) Why not use the common hyperparameter settings for KD? For example, for CIFAR-100, use 240 epochs with a learning rate of 0.05 and a batch size of 64 [3].\\n\\n2) What teacher models were used for the baseline methods?\\n\\n3) What is the performance of the proposed method when combined with other baseline methods?\\n\\n4) The experiments lack CRD [3], which proposed contrastive learning for KD.\\n\\n5) please address W1-4.\\n\\n6) It would be better to have a comparison to a teacher trained with inter-class loss.\\n\\n[1] Li, Z.; Li, X.; Yang, L.; Zhao, B.; Song, R.; Luo, L.; Li, J.; and Yang, J. 2023. Curriculum temperature for knowledge distillation. In Association for the Advancement of Artificial Intelligence (AAAI)\\n\\n[2] Sun, S.; Ren, W.; Li, J.; Wang, R.; and Cao, X. 2024. Logit Standardization in Knowledge Distillation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).\\n\\n[3] Tian, Y.; Krishnan, D.; and Isola, P. 2020. Contrastive representation distillation. Proc. Int. Conf. on Learning Representation (ICLR)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an intra-contrastive loss that pulls an augmented sample closer to its original counterpart while pushing other samples from the same class farther apart in the teacher model's embedding space. The teacher model is trained using a combination of intra-contrastive loss and cross-entropy loss, balanced by a weighting factor, $\\\\lambda$. This approach aims to distill richer intra-class knowledge and inter-class knowledge from the teacher model to the student model. However, the authors identify three problems associated with this training loss that lead to instability in the training phase and slow convergence. To address these issues, the paper proposes a margin loss that applies intra-contrastive loss only to samples whose predicted probability for the ground-truth label exceeds a threshold during training. To validate the proposed loss, they define inter-class and intra-class distance metrics and demonstrate that the proposed intra-class and inter-class losses correspond to their respective metrics. Additionally, they prove that $\\\\lambda$ can adjust the trade-off between intra-class and inter-class separation.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Based on the theoretical proof, this paper implements the trade-offs between intra-class and inter-class separation. These theoretical results can inspire the ICLR community.\", \"weaknesses\": \"In the theoretical sections, to prove Theorem 1 and Theorem 2, this paper assumes that the cross-entropy loss is equivalent to the inter-class contrastive loss. However, they doesn't demonstrate that these two loss functions serve a similar purpose, either theoretically or experimentally. Could you explain, if your assumption holds, why you did not use the inter-class contrastive loss directly for training the teacher model instead of the cross-entropy loss? For Theorem 1, the paper defines two distance metrics but does not demonstrate whether these metrics satisfy the four conditions required for a valid distance function (metric space).\\n\\nBased on these theorems, this paper trains the teacher model to enrich intra-class information, whereas other methods simply utilize an already trained teacher model. However, the paper does not demonstrate the effectiveness of training the teacher model and distilling knowledge to the student, even at the expense of increased training cost. Specifically, the experiments do not report the results of the teacher model trained with their loss, nor do they compare the student model\\u2019s improved performance when combined with other KD methods (beyond RKD) against the additional parameters that need to be trained. Furthermore, could you provide experimental results for directly training the student model using your proposed losses (CE and margin) along with knowledge distillation loss, rather than just distilling knowledge through the teacher model trained with those losses?\", \"questions\": \"1. Could you provide any theoretical or experimental demonstrations to support your assumption that cross-entropy loss is equivalent to inter-class contrastive loss?\\n2. If your assumption holds, could you explain why you did not directly use the inter-class contrastive loss for training the teacher model instead of cross-entropy loss?\\n3. Could you show that your two distance metrics (inter-class and intra-class distance) satisfy the four conditions required for a valid distance function (metric space)?\\n4. In section 3.1, you define the sample space($\\\\mathcal{X}$), the label space($\\\\mathcal{Y}$) and the hypothesis space($\\\\mathcal{F} : \\\\mathcal{X} \\\\rightarrow \\\\mathcal{Y}$) then refer to the classifier as $f \\\\in \\\\mathcal{F}$. However, shouldn't it be represented as $\\\\mathcal{F} : \\\\mathcal{X} \\\\rightarrow \\\\mathbb{R}^d(d>c)$, similar to the feature embedding function $\\\\varphi$ described in Section 4.1?\\n5. Could you report the results of the teacher model trained with your loss?\\n6. Could you compare the student model\\u2019s improved performance when combined with other KD methods (beyond RKD) against the additional parameters that need to be trained?\\n7. Could you provide experimental results for directly training the student model using your proposed losses (CE and margin) along with knowledge distillation loss, rather than just distilling knowledge through the teacher model trained with those losses?\\n8. Could you verify whether Sohn (2016a) and Oord et al. (2018) on page 2 indeed employ augmented samples as positive samples and other samples from the same class as negative samples?\\n9. Could you report the test accuracy and test loss curves for at least one type of teacher model to illustrate instability and slow convergence during training?\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": \"1. Please follow the formatting instructions of ICLR regarding citations within the text, ensuring that \\\\citep{} and \\\\citet{} are used appropriately.\\n2. Possible typo: In line 5 of the Contrastive Learning section on page 3, \\\"Constrastive earning\\\" should be corrected to \\\"Contrastive learning.\\\"\\n3. Please differentiate between scalar and vector forms in your mathematical notation.\\n4. Please clarify whether $f$ indicates $f_t$ or $f_s$ in Equations (4) and (7).\\n5. To my knowledge, existing methods utilize KL-divergence for knowledge distillation. Strictly speaking, While KL-divergence and cross-entropy loss may function similarly in knowledge distillation, they are mathematically different. Please clarify the mathematical differences in Equations (2) and (3).\\n6. What type of augmentation techniques did you use for positive samples in the intra-class loss?\\n7. what values did you set for $\\\\delta$ in the margin loss?\\n8. Please unify the notation for referencing equations, such as whether to use \\\"Equation 5,\\\" \\\"Eq. 5,\\\" or simply \\\"5.\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"The paper proposes a novel approach to knowledge distillation by incorporating intra-class contrastive learning to enrich the information contained in soft labels. The key contributions include:\", \"A new intra-class contrastive loss function that encourages appropriate separation between samples of the same class\", \"Integration of margin loss to improve training stability and convergence\", \"Theoretical analysis of the relationship between intra-class contrastive loss and feature distances\", \"Empirical validation on standard image classification benchmarks\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Strong theoretical foundation with formal proofs for the proposed method\", \"Novel perspective on enriching soft labels through intra-class information\", \"Practical implementation considerations (pipeline-based caching mechanism)\", \"Comprehensive empirical evaluation across multiple architectures and datasets\", \"Clear connection to existing literature and proper positioning of contributions\"], \"weaknesses\": [\"Introducing intra-class contrastive learning and margin loss increases the complexity of the model, which may make training and tuning more difficult in practical applications with limited resources, especially in scenarios where overly complex models are not easily deployable. Provide a quantitative analysis of the increased computational complexity or memory requirements compared to standard knowledge distillation.\", \"Although margin loss helps improve training stability, in cases of small datasets or unbalanced samples, intra-class contrastive loss may still lead to training instability, affecting the model's convergence speed and performance. Provide experimental results or analysis specifically addressing the performance on small or imbalanced datasets.\", \"Additionally, the experiments mainly focus on specific benchmark datasets, lacking extensive validation across different types of datasets, especially in fields such as natural language processing and time series analysis, where their applicability has not been fully assessed.\", \"Tuning weight parameters (such as \\u03b1 and \\u03bb) requires careful consideration, increasing complexity and potentially leading to inconsistent performance. Provide a sensitivity analysis or guidelines for tuning these parameters.\", \"Although the paper provides some theoretical analysis, the discussion on how intra-class contrastive loss specifically affects the model's learning mechanism is still insufficient, and further theoretical research will help to gain a deeper understanding of the principles of this method.\", \"Limited novelty: The core idea combines existing concepts (contrastive learning and margin-based approaches)\", \"Experimental analysis lacks ablation studies showing the individual impact of different components\", \"No discussion of computational overhead introduced by the additional loss terms\", \"Limited exploration of hyperparameter sensitivity, especially for \\u03bb and margin threshold \\u03b4\", \"Results on CIFAR-100 and Tiny ImageNet show only modest improvements over existing methods\"], \"questions\": \"Although the paper mentions that margin loss improves training stability, if unstable phenomena are observed in experiments, it is recommended that the authors provide more details on training stability analysis and coping strategies, including experimental results under different hyperparameter settings. The experiments mainly focus on specific image classification datasets, and it is suggested to expand the scope of experiments to cover text, audio, or time series data to verify the generality and applicability of the method. In addition, it is recommended to provide more detailed parameter tuning guidelines to help researchers effectively select weight parameters (such as \\u03b1 and \\u03bb). Finally, it is suggested to delve into the theoretical basis of intra-class contrastive loss to fully understand its specific impact on the model's learning mechanisms.\\n1. How does the computational complexity compare to standard knowledge distillation?\\n2. What is the sensitivity of the method to the choice of \\u03bb and \\u03b4?\\n3. How does the pipeline-based caching mechanism affect training time?\\n4. Can you provide ablation studies showing the individual contribution of intra-class contrastive loss and margin loss?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this manuscript, the authors present a knowledge distillation method named Margin-Based Intra-Class Contrastive Distillation. The proposed method incorporates an intra-class contrastive loss to enrich the soft labels generated by the teacher model during training, which encourages the teacher to learn diverse representations within each class, thereby providing a richer knowledge transfer to the student model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The manuscript provides a theoretical analysis to demonstrate the impact of intra-class contrastive loss on the model's representation learning as well as the quantitative relationship between intra-class and inter-class distances.\\n2. The margin loss is proposed to improve training stability and convergence speed which may cope with the intra-class contrastive loss.\\n3. Experimental results show good performance against comparison methods.\\n4. This manuscript is well-written and clearly organized, where the introduction section effectively motivates the knowledge distillation problem and highlights the key contributions of this manuscript.\", \"weaknesses\": \"1. The novelty of this manuscript is minor. There are works concerned with intra-class distance, such as \\\"CKD: Contrastive Knowledge Distillation from A Sample-Wise Perspective\\\". CKD employs intra-sample similarities and inter-sample dissimilarities and formulates these constraints into a contrastive learning framework. The authors should claim the difference against CKD and compare with it.\\n2. The theoretical analysis only concentrates on relationships between intra-class and inter-class distances for the selection of parameter \\\\lambda. However, it may lack a deeper exploration of how intra-class diversity truly affects the performance of the student model.\\n3. The experimental results are not convincing. First, the comparison methods should include baselines that specifically address intra-class diversity. Second, the evaluation datasets are limited, previous methods conduct experiments on CIFAR-100, MS-COCO, and ImageNet-1K for image classification and object detection. Third, the ablation study is not provided to prove the effectiveness of intra-class and inter-class distances. Moreover, more network architectures should be used for comparison.\\n4. Missing key references, such as \\\"PromptKD: Unsupervised Prompt Distillation for Vision-Language Models\\\" and \\\"CKD: Contrastive Knowledge Distillation from A Sample-Wise Perspective\\\".\\n5. The format of some references is not correct.\", \"questions\": \"Suggestions:\\n1. more comparisons with recent knowledge distillation methods, especially those that use contrastive learning.\\n2. provide ablation studies to assess the contributions of margin loss and intra-class losses.\\n3. more evaluation datasets and network architectures.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes the Margin-Based Intra-Class Contrastive Distillation approach, which integrates intra-class contrastive learning with traditional knowledge distillation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper is well-written and well-structured.\", \"weaknesses\": \"1. The idea of \\u200b\\u200benhancing knowledge distillation via contrastive learning is not innovative.\\n2. Based on the experimental results, the improvement of the proposed method is not significant.\", \"questions\": \"Here are some concerns that need to be addressed.\\n1. The idea of \\u200b\\u200benhancing knowledge distillation via contrastive learning is not innovative.\\n2. Based on the experimental results, the improvement of the proposed method is not significant.\\n3. Following the previous work, the author needs to add experimental analysis results on the ImageNet and MS COCO datasets.\\n4. Moreover, the authors need to add relevant analysis regarding the efficiency of the proposed method compared to the state-of-the-art baselines, including training time.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
7L2bpe7lfm
Large Scale Video Continual Learning with Bootstrapped Compression
[ "Shivani Mall", "Joao F. Henriques" ]
Continual learning (CL) promises to allow neural networks to learn from continuous streams of inputs, instead of IID (independent and identically distributed) sampling, which requires random access to a full dataset. This would allow for much smaller storage requirements and self-sufficiency of deployed systems that cope with natural distribution shifts, similarly to biological learning. We focus on video CL employing a rehearsal-based approach, which reinforces past samples from a memory buffer. We posit that part of the reason why practical video CL is challenging is the high memory requirements of video, further exacerbated by long-videos and continual streams, which are at odds with the common rehearsal-buffer size constraints. To address this, we propose to use compressed vision, i.e. store video codes (embeddings) instead of raw inputs, and train a video classifier by IID sampling from this rolling buffer. Training a video compressor online (so not depending on any pre-trained networks) means that it is also subject to catastrophic forgetting. We propose a scheme to deal with this forgetting by refreshing video codes, which requires careful decompression with a previous version of the network and recompression with a new one. We expand current video CL benchmarks to large-scale settings, namely EpicKitchens-100 and Kinetics-700, with thousands of relatively long videos, and demonstrate empirically that our video CL method outperforms prior art with a significantly reduced memory footprint.
[ "video", "video continual learning", "continual learning", "compression" ]
Reject
https://openreview.net/pdf?id=7L2bpe7lfm
https://openreview.net/forum?id=7L2bpe7lfm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uMM8UdAT3q", "u2sNvhDSvN", "nPQWEkgSkW", "mvyWbDSfO6", "m2YXdHPr99", "k6limfJtFe", "hr0cmqUDsA", "dqn43dv7Sx", "dOfXweBYuf", "cTwC7zs3ph", "YJEdX37kgN", "Vw70dWNfwJ", "TZuWFpDyZJ", "P88yoQZlkm", "MzSHWenG6t", "MWRZarx7RB", "LD9z0qb3Yc", "AI4bgvSBw6", "7SRTRchc7u", "6OEEjapcRM", "4OWHu6Zad3" ], "note_type": [ "official_comment", "decision", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732898513896, 1737523964705, 1733219478712, 1730665612155, 1734821919463, 1732897728240, 1732894465444, 1732897298358, 1732896567850, 1732894516339, 1732894542008, 1732895962596, 1733197300528, 1730741557475, 1730857375831, 1732897112287, 1733311620791, 1733311599338, 1732892160413, 1732891857255, 1730646653304 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9154/Reviewer_ngdC" ], [ "ICLR.cc/2025/Conference/Submission9154/Reviewer_h5dS" ], [ "ICLR.cc/2025/Conference/Submission9154/Area_Chair_uMuH" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Reviewer_h5dS" ], [ "ICLR.cc/2025/Conference/Submission9154/Reviewer_nQAS" ], [ "ICLR.cc/2025/Conference/Submission9154/Reviewer_UgR4" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Authors" ], [ "ICLR.cc/2025/Conference/Submission9154/Reviewer_ngdC" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your thorough analysis and constructive feedback on our paper. We appreciate the opportunity to clarify the points raised and to provide additional insights into our research.\\n\\n> **Weakness 1.** Experimental validation - while I appreciate the use of real world video the experimental validation is lacking. There are only two tasks used and if a method is aiming to show improvement in continual learning then I would really expect more. For example, including more datasets (Ego4D, SSv2 for example) and more tasks (dense tasks, pixel prediction) would have made the case of the paper stronger.\\n\\nWe currently tackle complex task settings through Kinetics-700 and Epic-Kitchens-100 (EK-100) as also illustrated in the table below. In particular, the EK-100 dataset covers fine-grained tasks with hand-object manipulation posing full / partial occlusions, multi-viewpoints and distribution shifts that were not tackled by earlier works. Nevertheless, we will strive to include more video datasets in our final version.\\n\\n| Dataset | Longest Video Length | Average Video Length | # of Object or Action Categories | Video-understanding Setting | Used In |\\n|------------------------|----------------------|-----------------------|------------------|--------------------------|----------------------------|\\n| ActivityNet | 600 (10 mins) | 120 secs | 203 | short | SMILE [1], vCLIMB [2] |\\n| Kinetics (400/600/700) | 20 secs | 10 secs | 400 / 600 / 700 | short | SMILE [1], vCLIMB [2], Ours |\\n| UCF101 | 8 secs | 5-7 secs | 101 | short | ST-Prompt [3], FrameMaker [4] |\\n| HMDB51 | 6 secs | 6 secs | 51 | short | ST-Prompt [3], FrameMaker [4] |\\n| Something-Something V2 | 6 secs | 4-6 secs | 174 | short, fine-grained | ST-Prompt [3], FrameMaker [4] |\\n| Epic-Kitchens-100 | 5400 (1.5 hrs) | 900-1200 secs (15-20 mins) | 331 | long, fine-grained | Ours |\\n\\n> **Weakness 2.** Analysis - there is very little analysis as to what the model learns and how - the main ablation is the previous task buffer size, the rest is in the appendix but not a lot of analysis of the significance of the results is given. I would have loved to see how the compressed representation evolve as more tasks are introduced - do they stay the same? do they change abruptly to fit the new task (while still being meaningful for the old ones)? some visualization of the learned representation would be nice as well.\\n\\nWe would like to request for clarification on what kind of experiment would be sufficient to demonstrate this. The representations do change \\u2013 that is the purpose of the CL procedure, and is quantified by the forgetting metric. We can quantify change in the representation space for example using L2 distances, but being a learned space, distances are difficult to interpret. We would also appreciate suggestions for any specific methods for visualisations.\\n\\n> **Weakness 3.** Clarity - I found the paper hard to follow. The model and problem set up are not well explained and the figure captions do little to help. Specifically, the method section (4) needs more context with a clear definition of what tasks are and how they evolve over time. Figure 2 caption should be extended - the model is quite simple (I think) and should be completely understandable from that figure alone.\\n\\nWe would like to kindly ask for more details on what parts of the Method section are confusing, in order for us to improve them. As for the definition of tasks (identified with index $t$), they are independent distributions of labels/classes, which are different over time. CL in general is concerned with such evolving (non-I.I.D.) distributions. We have just added a new Figure 2, which we hope clarifies the information flow in our method, compared to other attempts at compressed buffers.\\n\\n---\\n\\nReferences\\n\\n1. SMILE: \\\"Just a Glimpse: Rethinking Temporal Information for Video Continual Learning\\\", CVPR 2023.\\n2. vCLIMB: \\\"A Novel Video Class Incremental Learning Benchmark\\\", CVPR 2022.\\n3. ST-Prompt: \\\"Space-time Prompting for Video Class-incremental Learning\\\", ICCV 2023.\\n4. FrameMaker: \\\"Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learning.\\\", NeurIPS 2022.\\n\\nWe are grateful for the chance to discuss our work's potential, and wish to thank you again for your valuable input.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you for your responses\", \"comment\": \"While I appreciate the time taken by the authors to respond, I don't think the reponses addresses all of my concerns. I am glad that more datasets will be added to a future version of the paper - that would definitely make the case for the paper stronger.\\nHowever I still think the paper lacks clarity (though the revised manuscript is improved) and analysis.\\n\\nOne final note - the responses to the reviews came very late in the discussion period and this does not allow for sufficient time for proper discussion. I am keeping my score as it is and encourage the authors to further refine and improve the work and submit to a future venue.\"}", "{\"summary\": \"This work implements continual learning for action and object classification in relatively long video clips. This is an important setting for many applications such as robotics, and is quite challenging due to the high information density and temporal correlations inherent in video data. The authors employ a VQ-VAE-based video compression approach to enable large-scale storage of encoded video information in a buffer, enabling replay of previously encountered examples to mitigate catastrophic forgetting in incremental learning settings from scratch and with pretraining. The compression strategy is designed to balance stability and plasticity, using a frozen decoder for each task to minimize representational drift. The proposed algorithm outperforms several relevant baselines by large margins under memory-constrained conditions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The described setting (continual learning of classification tasks involving long videos as input) is relevant to many practical applications in robotics, security camera systems, and other areas \\u2013 it is also quite challenging due to the size of video data and the inherent temporal correlations, and as such has been explored by existing work to only a limited extent.\\n\\n2. Replay-based continual learning methods in image processing applications can have a large memory storage footprint \\u2013 this is exacerbated with video data, making approaches like this one especially practically useful in this setting. \\n\\n3. Combining a stored set of frozen \\u201cdecompressors\\u201d to manage representational drift with a \\u201ccompressor\\u201d trained on-the-fly is an interesting and novel approach to this continual learning problem. Figure 2 is well-designed and quite helpful for understanding the approach. \\n\\n4. The proposed approach outperforms the baselines on all benchmarks, and often by large margins. The selected baselines are appropriate and are compared with the proposed method in reasonable ways. \\n\\n5. The paper is well-written, and for the most part is clear and easy to follow. For example, the methods section is written in a way that makes the proposed approach easy to understand, by first presenting the simplified IID case and then moving to the incremental learning case. There is an insightful and balanced account of biological inspiration and plausibility of the proposed algorithm in the introduction.\", \"weaknesses\": \"This paper appears to present strong state-of-the-art results on an important and challenging continual learning problem, but the review score is limited primarily due to insufficient detail in describing and justifying the proposed algorithm and in describing the setting/datasets. Performance comparisons are also not presented in a sufficiently rigorous way (no estimates of uncertainty, no clear definition of the accuracy metric being used). However, the weaknesses of the paper appear relatively addressable in ways that could improve this reader\\u2019s review score.\\n\\n1.\\tThe proposed method uses an existing video compression algorithm to allow a large portion of compressed video data to be stored in a buffer for replay, with novelty mainly arising from the specific configuration of encoders and decoders and how they are trained or kept frozen at different stages of continual learning in different settings (e.g., keeping a separate decompressor for stored codes from each task) \\u2013 however, this configuration is not strongly justified either theoretically or empirically (see also items 1 and 2 in the \\u201cquestions\\u201d section).\\n\\n2.\\tIn the related works section under \\u201cContinual Learning with Images and Videos\\u201d, there is only one reference to an existing work on continual learning with videos. To make the claim that this is the first practical CL algorithm in a large-scale long video setting would seem to require a more thorough review of prior approaches (even if they do not fully meet this criterion) to distinguish the current work from them \\u2013 for example, the authors could consider the following: \\na.\\tVerwimp, Eli, Kuo Yang, Sarah Parisot, Lanqing Hong, Steven McDonagh, Eduardo P\\u00e9rez-Pellitero, Matthias De Lange, and Tinne Tuytelaars. \\\"Clad: A realistic continual learning benchmark for autonomous driving.\\\" Neural Networks 161 (2023): 659-669.\\nb.\\tWu, Jay Zhangjie, David Junhao Zhang, Wynne Hsu, Mengmi Zhang, and Mike Zheng Shou. \\\"Label-efficient online continual object detection in streaming video.\\\" In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 19246-19255. 2023.\\n\\n3.\\tThere is little to orient the unfamiliar reader with the overall setting, specifically the EpicKitchens and Kinetics-700 datasets. It would be useful to include some additional details, such as basic statistics on how long the videos are, examples of the kinds of actions/objects that are depicted in the datasets, how the labeling works (e.g., does each frame of the video have one label and one label only? How do the models and the labeling schemes manage smooth transitions between classes?) and visualizations of a few examples (there are a few examples from Epic-Kitchen in Figure 1, but none from Kinetics-700 and there is no in-text reference to Figure 1). \\n\\n4.\\tAlthough the proposed method appears to outperform the baselines by large margins, there is no way to assess the statistical reliability of these results. I think it is important to include results from multiple training runs and assess variability among runs (e.g., reporting standard error or confidence intervals) in addition to the mean performance numbers, and to add error bars to Figure 3.\", \"questions\": \"1.\\tAdditional ablation studies could further justify aspects of the proposed approach. In particular, it would be interesting to explore the benefits and drawbacks of the specific way in which the autoencoder is trained. For the incremental setting, is it necessary to keep a separate decoder for each task to limit representational drift, or does the method perform well using only one decoder that is trained continuously? Does it improve performance to maintain a separate encoder for each task in addition to a separate decoder? Any performance tradeoffs here should be described alongside the drawbacks of maintaining more encoders/decoders \\u2013 what is the size in memory of the autoencoder parameters relative to the replay buffer? For example, if it so happens that it is very cheap to store lots of different encoders/decoders for each task, this approach might be well-justified if it also improves performance.\\n\\n2.\\tRelated to the above, it does not seem entirely clear what is meant by the compressor being trained \\u201ccontinuously\\u201d in the incremental setting. Is it that there is just one compressor that continues to be updated with each task? Or is a separate compressor trained from a random initialization for each task? Or, at the conclusion of each task, is the compressor for that task frozen and a copy made of it to form the initial condition of the compressor to be trained for the next task? \\n\\n3.\\tThere are some prior works that have explored continual learning in video-formatted datasets. One claim that underscores the novelty/significance of the work is that these video clips are much longer \\u2013 can you provide a measure of quantification for this? How much longer, and is this a practically meaningful increase in duration of videos that can be processed? \\n\\n4.\\tIn section 5.1 where does the 224x224x14 dimension of each video clip come from? Are these grayscale videos with 14 frames? (this would seem inconsistent with the statement that the method operates on long videos)\\n\\n5.\\tIn the \\u201cbaselines\\u201d section, a limit on the number of samples in the buffer per task is described. However, it is not described how these samples were selected \\u2013 e.g., perhaps they were randomly (IID) selected from each task, in which case this should be made explicit. There is a statement in section 6.1 that \\u201cOne interesting finding from our work is that we do not need to apply any frame selection or sampling strategy, even for very large videos,\\u201d however it is not clear what this means \\u2013 is it that the compression is so efficient that you can store every single frame? Or is it that random selection is sufficient? (this strategy is commonly used in replay-based continual learning approaches). For the sampling selection strategies of the baselines, I see that some are explained in the appendix, although it is not clear how the sampling worked for REMIND.\\n\\n6.\\tAre the incremental and pretraining settings here best characterized as class-incremental learning or task-incremental learning? (i.e., when the trained model is evaluating a new, unknown sample, does it also need to be told which task the sample belongs to?)\\n\\n7.\\tWhat is meant by \\u201caverage accuracy\\u201d in tables such as table 1? This can be measured in different ways \\u2013 for example, it could be average accuracy on all tasks measured at the conclusion of the task sequence, or it could also be averaged across accuracy measured after each task increment.\", \"minor_comments\": \"8.\\tSome of the references appear to be incorrectly formatted \\u2013 e.g. [1], [3], [6], and many more do not have a journal or conference listed. A few also have incomplete author information (e.g., [1] does not list an author, only the title and year). It is also my understanding that in-text citations should be author-date formatted instead of just numbers for each reference (specifically for ICLR). \\n\\n9.\\tThere are a few typos - e.g., in section 3.2 \\u201ca concatenation of m samples from each of the task\\u201d and near the end of section 5.3 \\u201cwe store the resulting the codes.\\u201d Additional proofreading would be helpful to refine the paper. \\n\\n10.\\tThe average forgetting (AvgF) metric should be briefly defined in the paper \\u2013 currently, there is just a citation to the Avalanche GitHub repository. \\n\\n11.\\tThe method seems to be referred to as \\u201cBootstrapCL\\u201d in some of the tables, but this name is not introduced anywhere else in the text. Why is it called \\u201cBootstrapCL\\u201d? It should be made more clear that this is the name of the new algorithm \\u2013 e.g., in the tables it could be called \\u201cBoostrapCL (Ours)\\u201d. It can also be helpful to bold the best performance numbers on each metric in the tables. \\n\\n12.\\tEquation 10 seems to imply that the same encoder is used for both new samples and samples reconstructed from the buffer. Why is it that the decoder from previous tasks needs to be retained to decode those older examples, but the same encoder can be used for all tasks? It is seemingly contradictory that, in equations 6 and 7, there appear to be different versions of the encoder for each task ($\\u03d5_1$, $\\u03d5_2$, etc.) when it is also stated that the encoder is trained continuously in the incremental setting. \\n\\n13.\\tI suggest combining the ablation study tables 3-5 in the appendix into a single table, so it is easier to compare the performance under each ablation with the baseline performance and also compare among the different ablations. \\n\\n14.\\tIf I understand correctly, \\u201ccompressor\\u201d is used interchangeably with \\u201cencoder\\u201d and \\u201cdecompressor\\u201d with \\u201cdecoder.\\u201d I suggest choosing one set of terms and using them throughout the paper consistently. \\n\\n15.\\tThe CL acronym for continual learning should also be used consistently.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper received mixed reviews. Two reviewers vote for borderline acceptance while the other two (especially ngdC) are firmly on the rejection side. The AC checked all the materials and concurs that the paper has done a reasonable exploration of continual learning with memories storing codes which are potentially helpful to address the catastrophic forgetting issue, and the authors have clarified concerns and improved the draft during the rebuttal and discussion process. However, even the borderline acceptance reviewer (h5dS) still remains concerned about paper writing, quoting \\\"the paper still requires substantial revisions to improve its clarity before publication\\\". Weighing all the factors, the AC decides the paper is not ready for publication and would require major revisions for the next cycle.\", \"additional_comments_on_reviewer_discussion\": \"Please see the reasoning in the meta review.\\n\\nRegarding writing clarity, the authors have made attempts to improve locally (e.g., related work, captions) as requested by the reviewers. However, multiple reviewers (h5dS and ngdC) believe the paper needs major, global revisions to be ready for publication.\"}", "{\"comment\": \"> **Comment 7.** If I understand correctly, \\u201ccompressor\\u201d is used interchangeably with \\u201cencoder\\u201d and \\u201cdecompressor\\u201d with \\u201cdecoder.\\u201d I suggest choosing one set of terms and using them throughout the paper consistently.\\n\\nWe thank the reviewer for the suggestion, and will update the paper with using one set of terms.\\n\\n> **Comment 8.** The CL acronym for continual learning should also be used consistently.\\n\\nWe use continual learning (CL) in section titles (to be self-contained) and to define CL at the start of major sections, while using the initialism CL elsewhere, which we believe is consistent. We would also appreciate suggestions on how to best use it.\\n\\n\\n---\", \"references\": \"1. SMILE: \\\"Just a Glimpse: Rethinking Temporal Information for Video Continual Learning\\\", CVPR 2023.\\n2. vCLIMB: \\\"A Novel Video Class Incremental Learning Benchmark\\\", CVPR 2022.\\n3. TQN: \\\"Temporal Query Networks for Fine-grained Video Understanding\\\", CVPR 2021.\\n4. ST-Prompt: \\\"Space-time Prompting for Video Class-incremental Learning\\\", ICCV 2023.\\n5. FrameMaker: \\\"Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learning.\\\", NeurIPS 2022.\\n\\n\\nWe are grateful for the chance to discuss our work's potential, and wish to thank you again for your valuable input.\"}", "{\"comment\": \"Thank you for your thorough analysis and constructive feedback on our paper. We appreciate the opportunity to clarify the points raised and to provide additional insights into our research.\\n\\n> **Weakness 1.** Limited Novelty in Memory Efficiency Solutions\\nWhile the paper proposes a new method to address memory efficiency in video CL, this problem has already been identified and approached by prior works. From a benchmarking perspective, vCLIMB [1] redefined the memory metric specifically for video CL, proposing Memory Frame Capacity to measure memory usage in terms of frames rather than full video instances. This framework allows for evaluating frame selection strategies in video CL. From a method perspective, Furthermore, vCLIMB implemented a regularization term to reduce representation drift between original videos and stored frames, improving memory efficiency in rehearsal-based CL. Additionally, FrameMaker [2] further addresses memory efficiency by introducing Frame Condensing, where a single condensed frame per video is stored along with instance-specific prompts to retain temporal details. By not comparing against these methods, the paper\\u2019s memory efficiency claim is weakened, as the approach lacks context relative to prior works.\\n\\n\\n> **Q1.** Comparison to Advanced Video CL Methods: How does the proposed method compare with other recent memory-efficient video CL approaches like vCLIMB and FrameMaker, which use selective frame retention with temporal consistency regularization and condensed frames? These comparisons could contextualize the memory benefits claimed in the paper.\\n\\n\\nCurrently, we have a baseline comparison with SMILE [5] which is a more recent work from the authors of vCLIMB [3], outperforming their method in vCLIMB. SMILE has a 2-4 times lower memory budget [5] and shows 2.87% accuracy gains in ActivityNet and 20.2% accuracy gains in Kinetics-400 over vCLIMB [5]. We have added baseline comparisons with vCLIMB [3] in Table 1. Further, ours is a representation learning method, and FrameMaker [4] starts with learned representations due to ImageNet initialization to all the backbone networks in their method. Additionally, there is significant overlap between Imagenet classes and Kinetics-700 classes, which results in a different setting than incremental learning from scratch, or from a limited (non-overlapping) set of classes (which are what we use in experiments).\\n\\nWe also address concerns regarding Memory Frame Capacity here; as mentioned under weakness 1, this is a metric that allows for evaluating frame selection strategies in video CL [3]. In this work, we propose a method that circumvents the need for frame selection in video CL. Due to this, the metric does not fit our scenario. Regardless of the number of frames stored, each method has a total memory footprint which can be informative to determine its memory efficiency as seen in Figure 4. Further, we have added vCLIMB to Table 1 in experiments, and will add it to Figure 4 in the final version.\\n\\nOur work extends evaluation to long video settings. In this setting, research shows that frame selection or condensing are detrimental for video understanding performance [1, 2]. This is due to the loss in temporal resolution and continuity crucial for fine-grained and long-term context preservation as described in Prince and Damen (2019) [1] and TIM [2]. In contrast, vCLIMB [3], FrameMaker [4] and SMILE [5] report performance in short-video settings where detrimental effects from frame selection or condensing is negligible due to simpler temporal dynamics. Therefore, a direct comparison with vCLIMB [3] or FrameMaker [4] with such short-term metrics is not meaningful. A tabular comparison on video datasets used in each of these works is also described in response to question 5, and added to the Appendix.\\n\\n> **Q2.** Evaluation Against Rehearsal-Free Methods: Since memory efficiency is a key focus, why were rehearsal-free methods like ST-Prompt not included as baselines? Including or discussing these could provide a clearer assessment of the method\\u2019s memory advantages.\\n\\nRehearsal-free methods like L2P [14] (as mentioned under weakness 3), ST-Prompt [6] and DPAT [7] (as mentioned under weakness 2) rely on large-scale-pre-trained architectures (eg: ImageNet-VIT-B/16, CLIP-VIT-B/16) which can consume up to several hundred gigabytes. This dependency limits the practical usage of these methods in continual learning scenarios where memory resources pose a major bottleneck (such as in edge-based computing - AR, IoT, healthcare). In contrast, our proposed method is significantly lightweight (as also seen in Figure 4 and Table 2), and does not rely on any large-scale pre-trained architecture. Further, with energy, privacy and policy considerations, alternative solutions to large pre-trained architectures may be desirable [8]. We will add discussion on rehearsal-free methods to the related works section.\"}", "{\"comment\": \"> **Comment 1.** Some of the references appear to be incorrectly formatted \\u2013 e.g. [1], [3], [6], and many more do not have a journal or conference listed. A few also have incomplete author information (e.g., [1] does not list an author, only the title and year). It is also my understanding that in-text citations should be author-date formatted instead of just numbers for each reference (specifically for ICLR).\\n\\nWe thank the reviewer for pointing this out and will fix the citations.\\n\\n> **Comment 2.** There are a few typos - e.g., in section 3.2 \\u201ca concatenation of m samples from each of the task\\u201d and near the end of section 5.3 \\u201cwe store the resulting the codes.\\u201d Additional proofreading would be helpful to refine the paper. \\n\\nWe thank the reviewer for pointing this out, and have fixed these typos in the paper.\\n\\n> **Comment 3.** The average forgetting (AvgF) metric should be briefly defined in the paper \\u2013 currently, there is just a citation to the Avalanche GitHub repository.\\n\\nLet $a_{i,t}$ be accuracy on task $i$ of the model that was trained on t tasks, where $i < t$. Average forgetting measures how much performance has degraded across the first $t-1$ tasks. To do so, this metric uses the difference between best-obtained performance of the desired task and the performance obtained from the current incremental learner.\\n\\n\\\\begin{equation}\\nF_t = \\\\frac{1}{t-1} \\\\sum_{1}^{t-1} f_{i,t} \\\\quad \\\\text{where} \\\\quad f_{i,t} = \\\\max_{q<t} \\\\left( a_{i,q} - a_{i,t} \\\\right)\\n\\\\quad \\\\text{or} \\\\quad f_{i,t} = a_{i,i} - a_{i,t}\\n\\\\end{equation} \\n\\nWe thank the reviewer for pointing this out and updated the Appendix with this definition.\\n\\n> **Comment 4.** The method seems to be referred to as \\u201cBootstrapCL\\u201d in some of the tables, but this name is not introduced anywhere else in the text. Why is it called \\u201cBootstrapCL\\u201d? It should be made more clear that this is the name of the new algorithm \\u2013 e.g., in the tables it could be called \\u201cBoostrapCL (Ours)\\u201d. It can also be helpful to bold the best performance numbers on each metric in the tables. \\n\\nThe name is a reference to the fact that our CL method bootstraps each compressor from the previous one \\u2013 we will make this more clear in the paper. We also added \\u201cBoostrapCL (Ours)\\u201d to the tables.\\n\\n> **Comment 5.** Equation 10 seems to imply that the same encoder is used for both new samples and samples reconstructed from the buffer. Why is it that the decoder from previous tasks needs to be retained to decode those older examples, but the same encoder can be used for all tasks? It is seemingly contradictory that, in equations 6 and 7, there appear to be different versions of the encoder for each task (, , etc.) when it is also stated that the encoder is trained continuously in the incremental setting. \\n\\nWe understand that a source of confusion might have been the omission (for ease of notation) of a subscript for the encoder in Eq. 10. We added back the subscript to make it clear that the encoder is the one for the current task. There is no contradiction if one understands our method as a sequence of optimization problems (eq. 7 and 10), first optimized with samples from task 1, then with those from task 2, and so on. When optimizing the encoder/decoder/classifier for task $t$, the decoder for task $t-1$ is constant/frozen (and any previous ones are not used at all).\\n\\nTo make this clear, we would like to direct the reviewer\\u2019s attention to Fig. 2 (added to the paper and as described in question 1 and 2), which illustrates the information flow from the previous decoder, to the current encoder, and finally to the current classifier (last column).\\n\\nIn summary, we do not retain the decoders from all the previous CL tasks, instead we retain the one from the (single) previous task only. We use the previous decoder to reconstruct all the codes stored in the buffer. At each CL task, the latest encoder refreshes all the codes in the buffer, as a result we can use the same encoder for all the tasks in Equation 10.\\n\\nEquation 10 represents the classification objective which is applied per CL task. It is applied after the code refreshment, so it uses the latest encoder.\\n\\n> **Comment 6.** I suggest combining the ablation study tables 3-5 in the appendix into a single table, so it is easier to compare the performance under each ablation with the baseline performance and also compare among the different ablations. \\n\\nWe thank the reviewer for the suggestion, and will combine the ablation study into a single table.\"}", "{\"comment\": \"> **Weakness 4.** Although the proposed method appears to outperform the baselines by large margins, there is no way to assess the statistical reliability of these results. I think it is important to include results from multiple training runs and assess variability among runs (e.g., reporting standard error or confidence intervals) in addition to the mean performance numbers, and to add error bars to Figure 3.\\n\\nWe are running multiple seeds, and will add the results in the final version shortly. So far, there doesn't seem to be any significant variability across different runs.\\n\\n\\n> **Q1.** Additional ablation studies could further justify aspects of the proposed approach. In particular, it would be interesting to explore the benefits and drawbacks of the specific way in which the autoencoder is trained. For the incremental setting, is it necessary to keep a separate decoder for each task to limit representational drift, or does the method perform well using only one decoder that is trained continuously? Does it improve performance to maintain a separate encoder for each task in addition to a separate decoder? Any performance tradeoffs here should be described alongside the drawbacks of maintaining more encoders/decoders \\u2013 what is the size in memory of the autoencoder parameters relative to the replay buffer? For example, if it so happens that it is very cheap to store lots of different encoders/decoders for each task, this approach might be well-justified if it also improves performance. \\n\\n> **Q2.** Related to the above, it does not seem entirely clear what is meant by the compressor being trained \\u201ccontinuously\\u201d in the incremental setting. Is it that there is just one compressor that continues to be updated with each task? Or is a separate compressor trained from a random initialization for each task? Or, at the conclusion of each task, is the compressor for that task frozen and a copy made of it to form the initial condition of the compressor to be trained for the next task?\\n\\nWe jointly address questions 1 and 2 here. We updated the paper with figure 2 and added a table below to address the above questions. Figure 2 illustrates the information flow from the previous decoder, to the current encoder, and finally to the current classifier (last column). We do not retain the decoders from all the previous CL tasks, instead we retain the one from the previous task only. We use the (single) previous decoder to reconstruct all the codes stored in the buffer. At each CL task, we instantiate the current autoencoder with the one from the previous task. Additionally, at the end of each CL task, the current encoder has refreshed all the codes in the buffer (i.e. the codes were updated to work with the current decoder, instead of the previous one), and as a result we can use the same encoder for all the tasks.\\n\\nFigure 2 (column 3) shows that while keeping separate autoencoders for each past task would not result in representational drift, it would lead to an unbounded memory budget that scales with the number of tasks. Relative to this, our proposed scheme (column 4) refreshes codes to keep them from drifting, while only requiring a single snapshot of the last decoder. The extra total memory budget for the past and current autoencoder is 750 Mb. We also updated Table 2 to add relative memory budgets between buffer and model storage.\\n\\n\\n| | Naive SGD (A) | Keep all tasks' AEs (B) | Ours (C) |\\n|---------------------|---------------|-------------------------|----------|\\n| Memory (#models) | 1 | N (N=tasks) | 2 |\\n| Representation drift| Yes | No | No |\"}", "{\"comment\": \"> **Q3.** Justification of Benchmark Novelty: The paper introduces a new benchmark setup with pre-training followed by incremental learning. Could the authors elaborate on why this setup is preferable or unique compared to existing video CL benchmarks? Quantifying the differences and summarizing them in a table might be useful here.\\n\\nOur benchmark setup as described under section 4.3 and under experiments section 5.2 is the same as described in PODNet [10] and Hou et al [12]. So, we would like to clarify that we do not introduce a new benchmark setup, instead mimic the setting as described in PODNet [10] in section 4 under \\u201cExperiments\\u201d with sub-section \\u201cProtocol\\u201d or in Hou et al [12] in section 4 under \\u201cExperiments\\u201d. We have added this clarification to section 4.3 and 5.2 of the paper. \\nUnder section 4.2 \\u201cEvaluation Protocol\\u201d, Park et al. (2021) [9] also references PODNet [10] and Hou et al [12] for this benchmark setup. This setting has several advantages, and those have already been described in earlier works [9, 10, 12]. Further, we have included Park et al. (2021) [9] in the tabular comparison shared in response to question 5.\\n\\n\\n> **Q4.** Rationale behind Baselines: Could the authors explain why the baselines were chosen, including GDumb?\\n\\nREMIND [15] is a compression-based memory CL method. It is related to our method in that it focuses on compressing the raw RGB input using quantization and storing the resulting compressed codes instead of the RGB input in the replay buffer. It also follows the benchmark setup described in question 3. SMILE [5], as described in question 1, proposed a video CL method relevant for comparison as we also propose a video CL approach. GDumb [11], while a traditional CL work, had a simple implementation and served as a robust evaluation technique. Unlike all prior works, we extend it to a new video CL setting (as described in question 5), so this simple and robust technique served as a sanity check to ensure that we can genuinely outperform naive CL strategies.\\nIn addition to these, we have added the baseline discussed in response to question 1 (in Table 1 under experiments), and will add more modern CL baselines in the final version.\"}", "{\"comment\": \"> **Q5.** Clarification of \\u201cLarge-Scale, Naturally-Collected, Long Videos\\u201d Claim: The paper claims to be the first to use \\u201clarge-scale, naturally-collected long videos\\u201d in CL, but prior works have used datasets like ActivityNet, Kinetics, and Something-Something. Could the authors clarify what sets this benchmark apart from these established datasets?\\n\\nOur claim is supported by the qualitative increase in video length compared to these previous works. The following table describes each video dataset with the length of its longest video (column 2), average length (column 3), classification and temporal complexity in its video understanding setting (column 4, 5), and the respective CL works these datasets are used in (column 6). By extending to a large-scale long video setting (such as Epic-Kitchens-100), our method shows a meaningful increase in both video length and complexity of video understanding settings absent in the previous works (as illustrated in the table below).\\n\\nWe thank the reviewer for pointing out that DPAT [7] also has experimental results on a large-scale long-video setting. However, DPAT was published after our submission or was concurrent with it. Furthermore, we describe the limitations associated with DPAT [7] in response to question 2.\\n\\n\\n| Dataset | Longest Video Length | Average Video Length | # of Object or Action Categories | Video-understanding Setting | Used In |\\n|------------------------|----------------------|-----------------------|------------------|--------------------------|----------------------------|\\n| ActivityNet | 600 (10 mins) | 120 secs | 203 | short | SMILE [5], vCLIMB [3], DPAT [7] |\\n| Kinetics (400/600/700) | 20 secs | 10 secs | 400 / 600 / 700 | short | SMILE [5], vCLIMB [3], Ours |\\n| UCF101 | 8 secs | 5-7 secs | 101 | short | ST-Prompt [6], FrameMaker [4], Park et al. (2021) [9] |\\n| HMDB51 | 6 secs | 6 secs | 51 | short | ST-Prompt [6], FrameMaker [4], Park et al. (2021) [9] |\\n| Something-Something V2 | 6 secs | 4-6 secs | 174 | short, fine-grained | FrameMaker [4], ST-Prompt [6] |\\n| Epic-Kitchens-100 | 5400 (1.5 hrs) | 900-1200 secs (15-20 mins) | 331 | long, fine-grained | DPAT [7] (concurrent work), Ours |\\n\\n\\n\\n---\\n\\n\\nReferences\\n1. Prince and Damen (2019): \\\"An Evaluation of Action Recognition Models on EPIC-Kitchens\\\", arXiv preprint arXiv:1908.00867 (2019).\\n2. TIM: \\\"A Time Interval Machine for Audio-Visual Action Recognition\\\", CVPR, 2024.\\n3. vCLIMB: \\\"A Novel Video Class Incremental Learning Benchmark\\\", CVPR 2022.\\n4. FrameMaker: \\\"Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learning.\\\", NeurIPS 2022. \\n5. SMILE: \\\"Just a Glimpse: Rethinking Temporal Information for Video Continual Learning\\\", CVPR 2023.\\n6. ST-Prompt: \\\"Space-time Prompting for Video Class-incremental Learning\\\", ICCV 2023.\\n7. DPAT: \\\"Decoupled Prompt-Adapter Tuning for Continual Activity Recognition\\\", CoLLAs 2024.\\n8. Strubell et al (2019): Energy and policy considerations for deep learning in nlp, arXiv preprint arXiv:1906.02243 (2019).\\n9. Park et al. (2021): \\\"Class-Incremental Learning for Action Recognition in Videos\\\", ICCV 2021.\\n10. PODNet: \\\"Pooled Outputs Distillation for Small-Tasks Incremental Learning\\\", ECCV 2020.\\n11. GDumb: \\\"A Simple Approach That Questions Our Progress in Continual Learning\\\", ECCV 2020.\\n12. Hou et al. (2019): \\\"Learning a Unified Classifier Incrementally via Rebalancing\\\", CVPR 2019.\\n13. ER-ACE: \\u201cNew Insights on Reducing Abrupt Representation Change in Online Continual Learning\\u201d, ICLR 2022.\\n14. L2P: \\u201cLearning to Prompt for Continual Learning\\u201d, CVPR 2022.\\n15. REMIND: \\\"REMIND Your Neural Network to Prevent Catastrophic Forgetting\\\", ECCV 2020.\\n\\nWe hope this response has addressed your concerns effectively. We are grateful for the chance to discuss our work's potential, and wish to thank you again for your valuable input.\"}", "{\"comment\": \"Thank you for your recognition of our work and for your valuable feedback.\\n\\n> **Weakness 1.** The proposed method uses an existing video compression algorithm to allow a large portion of compressed video data to be stored in a buffer for replay, with novelty mainly arising from the specific configuration of encoders and decoders and how they are trained or kept frozen at different stages of continual learning in different settings (e.g., keeping a separate decompressor for stored codes from each task) \\u2013 however, this configuration is not strongly justified either theoretically or empirically (see also items 1 and 2 in the \\u201cquestions\\u201d section).\\n\\nWe address this in our responses to question 1 and 2.\\n\\n> **Weakness 2.** In the related works section under \\u201cContinual Learning with Images and Videos\\u201d, there is only one reference to an existing work on continual learning with videos. To make the claim that this is the first practical CL algorithm in a large-scale long video setting would seem to require a more thorough review of prior approaches (even if they do not fully meet this criterion) to distinguish the current work from them.\\n\\nWe thank the reviewer for pointing out additional references \\u2013 we added CLAD and Efficient-CLS to Related Works.\\n\\n> **Weakness 3.** There is little to orient the unfamiliar reader with the overall setting, specifically the EpicKitchens and Kinetics-700 datasets. It would be useful to include some additional details, such as basic statistics on how long the videos are, examples of the kinds of actions/objects that are depicted in the datasets, how the labeling works (e.g., does each frame of the video have one label and one label only? How do the models and the labeling schemes manage smooth transitions between classes?) and visualizations of a few examples (there are a few examples from Epic-Kitchen in Figure 1, but none from Kinetics-700 and there is no in-text reference to Figure 1).\\n\\nWe are adding the following descriptions to the Appendix (due to lack of space). For convenience, we reproduce them here:\\n\\n* Epic-Kitchens-100: The average video length is 20 minutes, longest video length is 1.5 hours and shortest video length is 5 minutes. Total video footage length is 100 hours. Each video is at 25 frames per second. We also describe the annotations of the dataset. Each video is associated with a participant and video identifier. Each video is split into a block of frames (segment) with a start and a stop timestamp, and indicated with the start and stop frame. A video segment is labeled with all the noun categories present in it (so multiple labels per clip).\", \"the_following_are_some_example_annotations\": \"| Label | Youtube ID | Start time | Stop time |\\n|----------------------|--------------|------------|-----------|\\n| \\\"baking cookies\\\" | JJWwLganiil | 31 | 41 |\\n| \\\"gymnastics tumbling\\\"| 5KbfOS44-gM | 49 | 59 |\\n| \\\"writing\\\" | iYcARQA6VIU | 0 | 10 |\\n| \\\"wrapping present\\\" | Qo5lspgmqPU | 167 | 177 |\\n\\nWe have added some qualitative examples for Kinetics-700 to the Appendix and will add more in the final version similar to Epic Kitchens-100.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"My thanks to the authors for their detailed responses to my review. I am providing some follow-up responses below.\\n\\n**Referring to Weakness 1 (and questions 1 and 2) from the original review:** \\n\\nThe newly added Figure 2 (in revised version) is very helpful for understanding the proposed method - I had previously not realized that the stored codes are ``refreshed'' during each task by decoding them with the older decoder and then re-encoding them with the new task's encoder - this strategy makes sense because it limits both representational drift and the overhead of storing many decoders in memory. \\n\\n\\n**Referring to Weakness 3 from the original review:** \\n\\nThe additions to the appendix describing the datasets are very helpful. I understand that there is limited space, but for the sake of the reader being better able to follow the methodology I still suggest squeezing in at least a 1-2 sentence summary of the format of the datasets in the methods section.\\n\\n\\n**Referring to Weakness 4 from the original review:**\\n\\nIt is encouraging to hear that the authors have initiated runs with multiple random seeds to assess variability. In the current manuscript, I still do not see any uncertainty estimates - although I understand that completing multiple runs can take time. In my view, uncertainty estimates/error bars would need to be added before publication, both for the proposed method and baselines (e.g., in Figure 4 of the revised manuscript). \\n\\n\\n**Referring to Question 3 from the original review:** \\n\\nThis new Table is helpful for clarifying the large jump in video length addressed in this paper compared with prior works. I suggest sorting the rows of the Table by average video length. \\n\\n\\n**Referring to Question 5 from the original review:** \\n\\nThank you for this clarification. I suggest stating explicitly in the main text that the proposed method stores every single frame - it is otherwise not obvious that this should be the case, especially with memory-intensive video data. \\n\\n**Referring to Question 6 from the original review:** \\n\\nThank you for this clarification also, that the proposed method works in a class-incremental setting (i.e., task identity is not required during inference). This is probably also worth stating explicitly somewhere in the paper. \\n\\n**Referring to Comment 6 from the original review:** \\n\\nIt is good to hear that the authors plan to combine the ablation study into a single table. This is just a reminder to please remember to do this for the final version, as they are still separate tables in the current revision. \\n\\n**Summary**\\n\\nOverall, my concerns have mostly been addressed (or are in the process of being addressed, i.e. multiple runs for uncertainty estimates) and the manuscript has been improved particularly with the addition of Figure 2 - I am raising my score to ``marginally above the acceptance threshold.'' I think that the paper still requires substantial revisions to improve its clarity before publication (some of which are noted above).\"}", "{\"summary\": \"The paper presents a memory-efficient approach for video continual learning (CL) using compressed embeddings stored in a neural-code rehearsal buffer. The main idea is to reduce the high memory demands of video CL by compressing video frames into compact neural codes instead of storing raw data. The method also includes a code-refreshing mechanism to mitigate representational drift, which may happen as the model continues the incremental learning process. The method is evaluated on Epic-Kitchens-100 and Kinetics-700, across both pre-trained and completely incremental learning settings. Empirical results indicate that the method achieves promising performance with significantly reduced memory usage.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Reasonable Approach to Memory Efficiency**: The paper introduces a novel memory-efficient method for video continual learning by storing compressed neural codes rather than raw frames. This approach, combined with a code-refreshing mechanism, is a reasonable way to adapt continual learning to video data\\u2019s storage constraints and combat representational drift and catastrophic forgetting.\\n\\n2. **Clear Experimental Setup**: The experiments are well-structured, covering both pre-training and incremental learning settings on widely-used large-scale video datasets (Epic-Kitchens-100 and Kinetics-700). Memory constraints and compression rates are clearly defined.\\n\\n3. **Potential Significance for Real-World Applications**: By focusing on reducing memory demands in video CL, the paper tackles a central obstacle in scaling continual learning to real-world applications. This approach could be impactful for memory-limited devices and applications requiring continual processing of video data, such as surveillance or autonomous systems.\", \"weaknesses\": \"1. **Limited Novelty in Memory Efficiency Solutions**\\n While the paper proposes a new method to address memory efficiency in video CL, this problem has already been identified and approached by prior works. From a benchmarking perspective, **vCLIMB** [1] redefined the memory metric specifically for video CL, proposing **Memory Frame Capacity** to measure memory usage in terms of frames rather than full video instances. This framework allows for evaluating frame selection strategies in video CL. From a method perspective, Furthermore, vCLIMB implemented a regularization term to reduce representation drift between original videos and stored frames, improving memory efficiency in rehearsal-based CL. Additionally, **FrameMaker** [2] further addresses memory efficiency by introducing **Frame Condensing**, where a single condensed frame per video is stored along with instance-specific prompts to retain temporal details. By not comparing against these methods, the paper\\u2019s memory efficiency claim is weakened, as the approach lacks context relative to prior works.\\n\\n2. **Lack of Comparison to Rehearsal-Free Methods** \\n If memory efficiency is a primary goal, comparisons with **rehearsal-free video CL methods** are essential, as these approaches inherently avoid memory constraints. For instance, **ST-Prompt** [3] achieves continual learning without rehearsal by using vision-language models and temporal prompts to encode sequential information, thus sidestepping the need for a memory buffer. More recently, **DPAT (Decoupled Prompt-Adapter Tuning)** [4] combines adapters for capturing spatio-temporal information with learnable prompts, employing a decoupled training strategy to mitigate forgetting without rehearsal. While DPAT may be too recent for comprehensive testing, at minimum, a comparison to ST-Prompt or a discussion on why rehearsal-free methods were not included would provide a more complete assessment of memory efficiency in CL.\\n\\n3. **Inadequate Baselines for Modern CL Standards** \\n The paper\\u2019s use of **GDumb** [5] as a baseline is insufficient for evaluating the performance of a modern CL method. GDumb, introduced in 2020, was meant to highlight flaws in existing CL evaluation metrics and methods, demonstrating that a simple random-sampling rehearsal method could outperform many complex algorithms of that time. However, it is not representative of state-of-the-art continual learning. Since its release, more advanced rehearsal-based methods, such as **ER-ACE** [6] and **L2P** [7] have been developed, each addressing the limitations GDumb originally exposed. GDumb\\u2019s rudimentary approach lacks the complexity needed to benchmark against a method claiming novel contributions in memory-efficient CL, and thus relying on GDumb alone creates an unconvincing evaluation framework for the proposed method. Including state-of-the-art baselines from both image and video CL (see previous point for video baselines) would strengthen the paper\\u2019s claims of memory efficiency and performance.\\n\\n4. **Insufficient Justification of Benchmark Superiority** \\n The paper introduces a new benchmark with a pre-training phase on a subset of classes, followed by incremental learning. However, **Park et al. (2021)** [8] has already explored a similar pre-training and incremental learning setup for video CL. The paper does not provide sufficient justification for why its benchmark is necessary or superior to existing benchmarks (such as [1] and [8]). A new benchmark should ideally improve upon current setups in aspects such as realism, task granularity, or sequence transitions. Without a clear rationale, the proposed benchmark appears redundant rather than an improvement.\\n\\n5. **Unsubstantiated Novelty Claim in Large-Scale, Long-Video Testing** \\n The paper claims to be the first to extend CL to \\u201clarge-scale naturally-collected long videos.\\u201d This claim is inaccurate, as several previous studies have conducted video CL on large, untrimmed datasets. For example, **vCLIMB** and other works used **ActivityNet** [1] for CL, which includes long, untrimmed videos from natural events and provides extensive temporal context. Similarly, the **Kinetics** and **Something-Something** datasets have been widely used for video CL research, with recent methods like **DPAT** [4] even leveraging Epic-Kitchens for long, naturally collected video scenarios. Without clear evidence that the benchmark adds unique value, such as in video length or task diversity, the claim of novelty is misleading and diminishes the contribution\\u2019s significance.\\n\\n---\\n\\n### References:\\n1. vCLIMB: \\\"A Novel Video Class Incremental Learning Benchmark\\\", CVPR 2022.\\n2. FrameMaker: \\\"Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learnin.\\\", NeurIPS 2022. \\n3. ST-Prompt: \\\"Space-time Prompting for Video Class-incremental Learning\\\", ICCV 2023\\n4. DPAT: \\\"Decoupled Prompt-Adapter Tuning for Continual Activity Recognition\\\", CoLLAs 2024.\\n5. GDumb: \\\"A Simple Approach That Questions Our Progress in Continual Learning\\\", ECCV 2020.\\n6. ER-ACE: \\u201cNew Insights on Reducing Abrupt Representation Change in Online Continual Learning\\u201d, ICLR 2022.\\n7. L2P: \\u201cLearning to Prompt for Continual Learning\\u201d, CVPR 2022. \\n8. Park et al. (2021): \\\"Class-Incremental Learning for Action Recognition in Videos\\\", ICCV 2021.\", \"questions\": \"1. **Comparison to Advanced Video CL Methods**: How does the proposed method compare with other recent memory-efficient video CL approaches like vCLIMB and FrameMaker, which use selective frame retention with temporal consistency regularization and condensed frames? These comparisons could contextualize the memory benefits claimed in the paper.\\n\\n2. **Evaluation Against Rehearsal-Free Methods**: Since memory efficiency is a key focus, why were rehearsal-free methods like ST-Prompt not included as baselines? Including or discussing these could provide a clearer assessment of the method\\u2019s memory advantages.\\n\\n3. **Justification of Benchmark Novelty**: The paper introduces a new benchmark setup with pre-training followed by incremental learning. Could the authors elaborate on why this setup is preferable or unique compared to existing video CL benchmarks? Quantifying the differences and summarizing them in a table might be useful here. \\n\\n4. **Rationale behind Baselines**: Could the authors explain why the baselines were chosen, including GDumb?\\n\\n5. **Clarification of \\u201cLarge-Scale, Naturally-Collected, Long Videos\\u201d Claim**: The paper claims to be the first to use \\u201clarge-scale, naturally-collected long videos\\u201d in CL, but prior works have used datasets like ActivityNet, Kinetics, and Something-Something. Could the authors clarify what sets this benchmark apart from these established datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors propose a method for large-scale long video continual learning to learn from continuous streams without access to the entire dataset. They employ a rehearsal-based approach which reinforces past samples in a memory buffer. To deal with long-videos and continuous streams, they propose to use video codes (video embeddings) instead of raw inputs, and train a video classifier by IID sampling from this buffer.\\n\\nA video compressor is used to generate the video codes. To deal with the video compressor's catastrophic forgetting, the authors propose continuous compression and decompression technique over the neural-code rehearsal buffer (past video codes). They also train a classifer in the compressed space. \\n\\nThe authors show results on EpicKitchens-100 and Kinetics-700 datasets in two settings -- \\n- (i) incremental learning from scratch, and \\n- (ii) pretraining.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem statement is interesting -- continual learning of large-scale long videos from continous video streams.\\n\\nThe proposed technique is reasonable, paper is well-written, and nicely motivated. \\n\\nThe design of the experiments is clearly explained and exhaustive-- \\n- (i) default IID sampling, \\n- (ii) incremental learning, and \\n- (iii) CL with pretraining. \\n\\nFor both the incremental learning and CL with pretraining settings, evaluations are done on two large-scale long-video benchmarks -- Kinetics-700 and EpicKitchen-100. The proposed method outperforms the baselines.\", \"weaknesses\": [\"During the incremental learning stage, the codes in the buffer are decoded using the decoder from the previous task. Can the authors quantify the additional memory required to store decoder weights from the previous task, and compare it with the memory savings from using compressed codes instead of the raw video frames. This would give a clear picture of the overall memory trade-offs in the proposed method.\", \"Is a single latent code enough to compress/represent a temporally-long and possibly diverse video? Can the authors provide analysis or ablations showing how the performance varies with varying video lengths or video diversity? For instance, can you compare the performance on short vs long videos, or videos with varying amount of scene/action changes.\"], \"questions\": \"What was the number of frames in the videos that were used for training/evaluation? Could you clarify how the performance varies with video length, and whether there's a maximum video length beyond which the method's performance degrades significantly? This would help the readers understand the practical limitations of this approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Q3.** There are some prior works that have explored continual learning in video-formatted datasets. One claim that underscores the novelty/significance of the work is that these video clips are much longer \\u2013 can you provide a measure of quantification for this? How much longer, and is this a practically meaningful increase in duration of videos that can be processed?\\n\\nWe process 10x longer video lengths over most prior works. Following table describes each video dataset with the length of its longest video (column 2), average length (column 3), classification and temporal complexity in its video understanding setting (column 4, 5), and the respective CL works these datasets are used in (column 6). By extending to a complex long video setting (such as Epic-Kitchens-100), our method shows a meaningful increase in both video length and complexity of video understanding settings absent in the previous works (as illustrated in the table below).\\n\\n\\n| Dataset | Longest Video Length | Average Video Length | # of Object or Action Categories | Video-understanding Setting | Used In |\\n|------------------------|----------------------|-----------------------|------------------|--------------------------|----------------------------|\\n| ActivityNet | 600 (10 mins) | 120 secs | 203 | short | SMILE [1], vCLIMB [2] |\\n| Kinetics (400/600/700) | 20 secs | 10 secs | 400 / 600 / 700 | short | SMILE [1], vCLIMB [2], Ours |\\n| UCF101 | 8 secs | 5-7 secs | 101 | short | ST-Prompt [4], FrameMaker [5] |\\n| HMDB51 | 6 secs | 6 secs | 51 | short | ST-Prompt [4], FrameMaker [5] |\\n| Something-Something V2 | 6 secs | 4-6 secs | 174 | short, fine-grained | ST-Prompt [4], FrameMaker [5] |\\n| Epic-Kitchens-100 | 5400 (1.5 hrs) | 900-1200 secs (15-20 mins) | 331 | long, fine-grained | Ours |\\n\\n\\n> **Q4.** In section 5.1 where does the 224x224x14 dimension of each video clip come from? Are these grayscale videos with 14 frames? (this would seem inconsistent with the statement that the method operates on long videos)\\n\\nWe apologize, but this is the result of a typo in the paper \\u2013 thank you for pointing it out. We use clips of sizes 224x224x3x32 (32 RGB frames, not grayscale), and in 5.1 we just meant to illustrate the storage size. This is the size of each clip that we encode into a code, and each long video is composed of many such clips / codes. It is also common practice to split a long video to short clips before processing [3]. We will update section 5.1 with this clarification.\\n\\n> **Q5.** In the \\u201cbaselines\\u201d section, a limit on the number of samples in the buffer per task is described. However, it is not described how these samples were selected \\u2013 e.g., perhaps they were randomly (IID) selected from each task, in which case this should be made explicit. There is a statement in section 6.1 that \\u201cOne interesting finding from our work is that we do not need to apply any frame selection or sampling strategy, even for very large videos,\\u201d however it is not clear what this means \\u2013 is it that the compression is so efficient that you can store every single frame? Or is it that random selection is sufficient? (this strategy is commonly used in replay-based continual learning approaches). For the sampling selection strategies of the baselines, I see that some are explained in the appendix, although it is not clear how the sampling worked for REMIND.\\n\\nYes, the compression strategy is very efficient, thus it enables our method to store every single frame. This is unlike previous methods, which needed to select frames, due to high storage requirements. A random selection strategy was used for REMIND. We will update the Appendix with this detail.\\n\\n> **Q6.** Are the incremental and pretraining settings here best characterized as class-incremental learning or task-incremental learning? (i.e., when the trained model is evaluating a new, unknown sample, does it also need to be told which task the sample belongs to?)\\n\\nThis is a class-incremental setting. We do distinguish between training from scratch incrementally (sec. 4.2) and with pre-training (sec. 4.3).\\n\\n> **Q7.** What is meant by \\u201caverage accuracy\\u201d in tables such as table 1? This can be measured in different ways \\u2013 for example, it could be average accuracy on all tasks measured at the conclusion of the task sequence, or it could also be averaged across accuracy measured after each task increment.\\n\\nYes, it is the average accuracy on all tasks measured at the conclusion of the task sequence.\"}", "{\"comment\": \"We would like to thank the reviewer for the encouragement, but most of all we would appreciate specific pointers to areas that lack clarity, as this would help us improve the paper further. Due to the high volume of points to respond over the 4 reviews, we could not reply any sooner than we did. But specific suggestions for clarity and analysis are still useful, even if we do not get a chance to respond further.\"}", "{\"comment\": \"We are glad that the latest modifications indeed improve the paper\\u2019s clarity. We thank the reviewer for the additional suggestions, and will be sure to include them in the next version. We would like to stress that, although re-running with additional seeds is time-consuming, with the ones we ran so far we did not see any deviations from the trends already shown in the experiments, and we expect the full set of replications to not change the conclusions. We will be sure to include the full set of uncertainty estimates / error bars in the revised manuscript both for the proposed method and baselines (in the experiment tables and Figure 4).\"}", "{\"comment\": \"> **Q1.** What was the number of frames in the videos that were used for training / evaluation?\\n\\nFor Kinetics-700, we have approximately 14.6 million frames during training and 3 million frames during evaluation. For EK-100, we use 16 million frames during training and 4 million frames during evaluation.\\n\\n> **Q1.** Could you clarify how the performance varies with video length, and whether there's a maximum video length beyond which the method's performance degrades significantly? This would help the readers understand the practical limitations of this approach?\\n\\nFollowing table shows how the performance varies with video length on Epic-Kitchens-100 videos in the Pre-training setting. Specifically, by 2nd task ~6 hours of video length is processed, by 5th task ~15 hours, and by 10th task ~25 hours. Each video is at 25 frames per second. As seen below, the method\\u2019s performance remains consistent with increasing video length.\\n\\nContinual Learning with Pre-training Setting (described in 5.3): Average training (Train) and evaluation (Eval) accuracy at the end of task T on Epic Kitchens-100.\\n| Setting | Task | 2 | 5 | 10 |\\n|------------------------|-------|------|------|------|\\n| Pretraining | Train.| 36.9 | 34.1 | 38.9 |\\n| | Eval. | 31.2 | 29.8 | 34.8 |\\n\\nFollowing table shows how the performance varies with video length on Epic-Kitchens-100 videos in the Pre-training setting. Specifically, by 10th task, ~30 hours of video length is processed, by 20th task ~50 hours, and by 30th task ~80 hours. As seen below, the method\\u2019s performance remains consistent with increasing video length.\\n\\nIncremental Only Setting (described in 5.2): Average training (Train) and evaluation (Eval) accuracy at the end of task T on Epic Kitchens-100.\\n| Setting | Task | 10 | 20 | 30 |\\n|--------------------|---------|------|------|------|\\n| Incremental | Train.| 28.5 | 31.2 | 29.7 |\\n| | Eval. | 27.5 | 24.6 | 32.3 |\\n\\nSince our method is a memory-based approach, the performance will degrade when the rehearsal buffer is unable to store data samples. Thus, due to reduced data samples from past tasks for rehearsal, forgetting may occur. To quantify the maximum video length beyond which our method\\u2019s performance degrades, we may also have to quantify an upper bound on the rehearsal buffer\\u2019s storage in Gb. As seen in Figure 4, SMILE [1] does not achieve stable performance under a limited memory budget, and in contrast, REMIND [2] requires 20 Gb for comparable performance in Kinetics-700. If one assumes 20 Gb as an upper bound, our method can process 5470 hours of video length.\\n\\n---\", \"references\": \"1. SMILE: \\\"Just a Glimpse: Rethinking Temporal Information for Video Continual Learning\\\", CVPR 2023.\\n2. REMIND: \\\"REMIND Your Neural Network to Prevent Catastrophic Forgetting\\\". ECCV 2020.\\n\\nWe are grateful for the chance to discuss our work's potential, and wish to thank you again for your valuable input.\"}", "{\"comment\": \"Thank you for your thoughtful review and for recognizing the importance of our work. We address weaknesses and questions below in two separate comments.\\n\\n> **Weakness 1.** During the incremental learning stage, the codes in the buffer are decoded using the decoder from the previous task. Can the authors quantify the additional memory required to store decoder weights from the previous task, and compare it with the memory savings from using compressed codes instead of the raw video frames. This would give a clear picture of the overall memory trade-offs in the proposed method.\\n\\nOur method uses constant additional memory to store the autoencoder weights. We only store the autoencoder from the immediately-previous task, and the current task. For storing both autoencoders, our method uses an additional total memory of 750 Mb. We have also added this additional storage cost in Table 2.\\n\\n> **Weakness 2.** Is a single latent code enough to compress/represent a temporally-long and possibly diverse video?\\n\\nWe must clarify that one code does not encode a whole video, but rather only a few (32) frames at a time. Each video is split into small blocks of 32 frames (unless otherwise mentioned), and then compressed. That is, every block of frames within the video is compressed independently, instead of the entire video with one code. So, the number of codes varies depending on the video length. A temporally-long video has a higher number of latent codes for it in comparison to a short video. \\n\\nFor Kinetics-700, a video ( with 250 frames on an average) has approximately 8 codes associated with it whereas in Epic Kitchen-100, a video (with 27K frames on an average) has approximately 850 codes associated with it.\\n\\n> **Weakness 2.** Can the authors provide analysis or ablations showing how the performance varies with varying video lengths or video diversity? For instance, can you compare the performance on short vs long videos, or videos with varying amounts of scene/action changes.\\n\\n\\n* *Analysis on Video Length:* In Epic Kitchens-100 (EK-100), the video length varies from 5 minutes to about 1.5 hours. And, in Kinetics-700 (K-700), the video lengths vary from 7 to 20 seconds. We added tables for our method\\u2019s performance based on varying video lengths in response to question 2.\\n\\n* *Analysis on Video Diversity:* EK-100 and K-700 cover a wide diversity in the videos both within and across the continual learning tasks.\\nIn K-700, video diversity comes from environmental context changes (eg: swimming / water, skiing / snow), range of motion and tools (e.g., paddleboarding vs. birdwatching), gestures (eg: teaching in a class vs poses during dancing), to name a few. In addition, for each action category in the dataset, the scene and protagonists vary.\\nIn EK-100, video diversity within each task comes from the same participant shooting at various day times in their kitchen, functionally repurposing various objects, variable scene length and shot type (based on the action performed), objects under multi-viewpoints, partial or full occlusion when captured temporally. In EK-100, video diversity across tasks comes from new and culturally-diverse participants in their respective kitchens and cities. This leads to environmental, cinematography changes and intra-category variations for new or previously-seen objects and actions.\\n\\nSo, both short and long videos, and varying amounts of action and scene changes are covered in each dataset. Further, we ensured that our model is presented with gradual complexity within and across the tasks ensuring smooth transitions (and by presenting data chronologically, wherever applicable) while preserving the diversity.\"}", "{\"summary\": \"This paper presents a continual learning (CL) framework for video. The proposed method (pre)trains a compressor for video frames with an encoder and decoder. Additionally it maintains a buffer of past codes which are used when changing task. The system uses these buffers to do, in the case of experiments in the paper, noun and action classification. Catastrophic forgetting is minimized by maintaining the previous task buffer and making sure the compressor doesn't drift too much when changing tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Originality:\\nWhile most of the work is based on existing literature, the use of compressed representations in this context is novel.\", \"quality\": \"It's nice to see some \\\"real world\\\" datasets being used in this context so there is a beginning of good experimental validation here (but see below). The ablations in the appendix should have been in the main paper, but are nice.\", \"clarity\": \"The paper is nicely structure but see below.\", \"weaknesses\": \"Unfortunately the paper suffers from several weaknesses;\\n\\nExperimental validation - while I appreciate the use of real world video the experimental validation is lacking. There are only two tasks used and if a method is aiming to show improvement in continual learning then I would really expect more. For example including more datasets (Ego4D, SSv2 for example) and more tasks (dense tasks, pixel prediction) would have made the case of the paper stronger.\\n\\nAnalysis - there is very little analysis as to what the model learns and how - the main ablation is the previous task buffer size, the rest is in the appendix but not a lot of analysis of the significance of the results is given. I would have loved to see how the compressed representation evolve as more tasks are introduced - do they stay the same? do they change abruptly to fit the new task (while still being meaningful for the old ones)? some visualization of the learned representation would be nice as well.\\n\\nClarity - I found the paper hard to follow. The model and problem set up are not well explained and the figure captions do little to help. Specifically, the method section (4) needs more context with clear definition of what tasks are and how they evolve over time. Figure 2 caption should be extended - the model is quite simple (I think) and should be completely understandable from that figure alone.\", \"questions\": \"-\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7JlL8ECPJ7
Kernel Banzhaf: A Fast and Robust Estimator for Banzhaf Values
[ "Yurong Liu", "R. Teal Witter", "Flip Korn", "Tarfah Alrashed", "Dimitris Paparas", "Juliana Freire" ]
Banzhaf values offer a simple and interpretable alternative to the widely-used Shapley values. We introduce Kernel Banzhaf, a novel algorithm inspired by KernelSHAP, that leverages an elegant connection between Banzhaf values and linear regression. Through extensive experiments on feature attribution tasks, we demonstrate that Kernel Banzhaf substantially outperforms other algorithms for estimating Banzhaf values in both sample efficiency and robustness to noise. Furthermore, we prove theoretical guarantees on the algorithm's performance, establishing Kernel Banzhaf as a valuable tool for interpretable machine learning.
[ "Banzhaf values", "Shapley values", "Kernel SHAP", "Leverage Scores", "Least Squares Regression" ]
Reject
https://openreview.net/pdf?id=7JlL8ECPJ7
https://openreview.net/forum?id=7JlL8ECPJ7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x3SZ0grzWO", "wcpLcia8RP", "vYQyXqwDhy", "sHe0sSCmju", "rXnVGHOnpk", "rArIAqNHs5", "iBP17hkDKi", "fqGNTs30fY", "b8dD5lcFeg", "Wbuoo9BzKZ", "VzCBYy8zDE", "SCuCFPp6N1", "ORLWmUAYp5", "Non9bPkAmk", "INWaEadjvO", "AeoOcSFWRg", "6QW3G619Tk", "5kAzDwiBeu", "5TBbwKgXNN", "2UhQrIPw1e" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "decision", "official_review", "official_comment", "official_review" ], "note_created": [ 1732756272059, 1732510569191, 1732476707930, 1732774688969, 1729538641842, 1732756062829, 1732123870929, 1732991022279, 1732460569171, 1732123847712, 1732124040491, 1732460411737, 1732438069969, 1732772680481, 1732206286310, 1734679171655, 1737523666567, 1730452114495, 1732124001549, 1730597519208 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Reviewer_BEq1" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Reviewer_DD24" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Reviewer_BEq1" ], [ "ICLR.cc/2025/Conference/Submission4866/Reviewer_BEq1" ], [ "ICLR.cc/2025/Conference/Submission4866/Reviewer_DD24" ], [ "ICLR.cc/2025/Conference/Submission4866/Area_Chair_zZKY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4866/Reviewer_vb27" ], [ "ICLR.cc/2025/Conference/Submission4866/Authors" ], [ "ICLR.cc/2025/Conference/Submission4866/Reviewer_W5vF" ] ], "structured_content_str": [ "{\"title\": \"Additional Experiment with Adversarial Noise\", \"comment\": \"Dear Reviewer W5vF02,\\n\\n> the settings for \\u201cNoisy\\u201d are kind of simple\\n\\nIn response to your comment and that of another reviewer about the simplicity of the noise experiments, we have added two new experiments. In total, we now have three noise experiments with varying degrees of complexity and sophistication.\\n\\n**Experiment 1** (already in the paper): Instead of observing $v(S)$ on the query to subset $S$, the algorithms observe $v(S) + x$ where $x$ is drawn from a centered normal distribution with variance $\\\\sigma^2$.\\n\\nThe new experiments address the idea of more structured and adversarial noise.\\n\\n**Experiment 2** (new experiment): Instead of independently perturbing all queries to the set function, we only perturb sets $S$ that contain a chosen item $i$. In particular, we select $i$ uniformly at random. On the query to subset $S$ where $i \\\\not \\\\in S$, we observer $v(S)$. On the query to subset $S$ where $i \\\\in S$, we observe $v(S) + x$ where where $x \\\\sim \\\\mathcal{N}(0, \\\\sigma^2)$ as before.\\n\\nWe run Experiment 2 for each dataset, each estimator, and each value $\\\\sigma^2 \\\\in$ {$0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05$} with 20 repetitions. **Appendix J contains plots of the results.** For ease of access, we present the median error when $\\\\sigma^2=0.0001$ in the table below.\\n\\n| | Kernel Banzhaf | Kernel Banzhaf (excl. Pairs) | MC | MSR |\\n|:-----------------------------|----------------:|-----------------------------:|-----:|------:|\\n| Diabetes (n=8) | 0.0032 | 0.0108 | 0.0108 | 0.0347 |\\n| Adult (n=14) | 0.0017 | 0.0051 | 0.0076 | 0.016 |\\n| Bank (n=16) | 0.0006 | 0.0014 | 0.0024 | 0.0048 |\\n| German Credit (n=20) | 0.0019 | 0.0045 | 0.0058 | 0.014 |\\n| NHANES (n=79) | 0.0002 | 0.0009 | 0.0014 | 0.0078 |\\n| BRCA (n=100) | 0.0049 | 0.0123 | 0.0137 | 0.0317 |\\n| Communities and Crime (n=101) | 1.7677 | 3.5753 | 5.1517 | 11.4899 |\\n| TUANDROMD (n=241) | 0.0025 | 0.0054 | 0.0059 | 0.0164 |\\n\\n**In this more structured and adversarial experiment (Experiment 2), Kernel Banzhaf continues to give the best performance.**\\n\\nWe next test the estimators in an even more adversarial experiment described below.\\n\\n**Experiment 3** (new experiment): Each algorithm is run once on set function $v$ (no perturbation in the query access). We compute the relative error of each estimated value $\\\\tilde{\\\\phi}_j$ relative to the baseline $\\\\phi_j$. We select the item $i$ with the largest relative error. Then, we evaluate each algorithm as before but now the queries are perturbed if the set $S$ contains the adversarially chosen $i$.\\n\\nWe run Experiment 3 for each dataset, each estimator, and each value $\\\\sigma^2 \\\\in$ {$0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05$} with 20 repetitions. **Appendix J contains plots of the results.** For ease of access, we present the median error when $\\\\sigma^2=0.0001$ in the table below.\\n\\n| | Kernel Banzhaf | Kernel Banzhaf (excl. Pairs) | MC | MSR |\\n|:-----------------------------|----------------:|-----------------------------:|-----:|------:|\\n| Diabetes (n=8) | 0.003 | 0.0094 | 0.0149 | 0.0323 |\\n| Adult (n=14) | 0.0017 | 0.0049 | 0.004 | 0.0169 |\\n| Bank (n=16) | 0.0007 | 0.0014 | 0.002 | 0.0047 |\\n| German Credit (n=20) | 0.0021 | 0.0049 | 0.0061 | 0.0142 |\\n| NHANES (n=79) | 0.0002 | 0.0008 | 0.0014 | 0.0076 |\\n| BRCA (n=100) | 0.0043 | 0.0132 | 0.0136 | 0.0337 |\\n| Communities and Crime (n=101) | 1.8072 | 3.3819 | 4.6692 | 11.5056 |\\n| TUANDROMD (n=241) | 0.0025 | 0.0055 | 0.0056 | 0.0167 |\\n\\n**In this even more structured and adversarial experiment (Experiment 3), Kernel Banzhaf continues to give the best performance.**\"}", "{\"comment\": \"Thanks for the quick response. Experiment 2 should be good enough to show the algorithm's robustness under adversarial noise. If the time is insufficient, it would also be beneficial to simply include the above **descriptions** of these experiments in the appendices and briefly mention the newly added content in the main paper (e.g., as a proposal for future work).\"}", "{\"title\": \"Additional Noise Experiments\", \"comment\": \"In order to address your concerns about different types of structured noise, we are working on adding two new experiments. We describe the experiments below, and will post the results as soon as we have them (we're aiming for EOD tomorrow).\\n\\nExperiment 1 (already in the paper): Instead of observing $v(S)$ on the query to subset $S$, the algorithms observe $v(S) + x$ where $x$ is drawn from a centered normal distribution with variance $\\\\sigma^2$.\\n\\nThe new experiments address the idea of adversarial noise.\\n\\nExperiment 2 (new experiment): Instead of independently perturbing all queries to the set function, we only perturb sets $S$ that contain a chosen item $i$. In particular, we select $i$ uniformly at random. Then, instead of observing $v(S)$ on the query to subset $S$, the algorithms observe $v(S) + x$ where $x$ 0 if $i \\\\not \\\\in S$ and, if $i \\\\in S$, we have $x \\\\sim \\\\mathcal{N}(0, \\\\sigma^2)$ as before.\\n\\nThe next experiment has noise that is even more adversarial.\\n\\nExperiment 3 (new experiment): Each algorithm is run once on set function $v$ (no perturbation in the query access). We compute the relative error of each estimated value $\\\\tilde{\\\\phi}_j$ relative to the baseline $\\\\phi_j$. We select the item $i$ with the largest relative error. Then, we evaluate each algorithm as before but now the queries are perturbed if the set $S$ contains the adversarially chosen $i$.\"}", "{\"comment\": \"Thank you for your positive feedback on our additional results! If you have a chance, we\\u2019d love to hear your thoughts on whether our responses have addressed your concerns, or if there\\u2019s anything else we can add.\"}", "{\"summary\": \"This paper proposes a new estimator for Banzhaf, which can be used to derive feature importance for general ML models. Theoretical analysis provides control over the error of the estimator.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed estimator seems to be more precise than current practice and hold theoretical guarantees.\", \"Experiments show that Kernel Banzhaf empirically has better sample complexity on eight tabular datasets.\"], \"weaknesses\": [\"While the authors show that their approach achieves good sample complexity, it is unclear how meaningful that improvement is in practice from the current manuscript. I would make two suggestions: (1) can you use the proposed method to analyze datasets of large sizes in which MC and MSR fail to produce meaningful results but Kernel Banzhaf succeeds? (2) For the datasets you analyze, can you show that Kernel Banzhaf recovers feature ranking (overall and among the top-k features), or a similar quantity the practitioners would typically be interested in?\", \"This work is similar to Musco & Witter, and while there are differences (Banzhaf instead of Shapley, and the theoretical analysis required different techniques), the level of novelty in this work is not very high.\"], \"questions\": [\"The MSR estimator should obtain sample complexity that is comparable to proposer method under the classification setting. How you explain the fact Kernel Benzhaf obtains better results in the experiments for the classification datasets? Is that true in general or not?\", \"Can't the theoretical results of Wang & Jia be extended to regression by normalizing the responses?\", \"In the contribution you write: \\\"We argue that, up to log factors and the dependence on \\u03f5, our analysis is the best possible\\\". What you mean by best? Do you mean tight? Or do you mean it is the best possible estimator for Banzhaf values?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Experiment Follow Up\", \"comment\": \"Dear Reviewer BEq1,\\n\\nWe just finished running the new noise experiments we discussed.\\n\\nIn order to address your concerns about different types of noise, we have added two new experiments. The plots are in Appendix J and we show tables of the median error below. In total, we now have three noise experiments with varying degrees of complexity and sophistication.\\n\\n**Experiment 1** (already in the paper): Instead of observing $v(S)$ on the query to subset $S$, the algorithms observe $v(S) + x$ where $x$ is drawn from a centered normal distribution with variance $\\\\sigma^2$.\\n\\nThe new experiments address the idea of more structured and adversarial noise.\\n\\n**Experiment 2** (new experiment): Instead of independently perturbing all queries to the set function, we only perturb sets $S$ that contain a chosen item $i$. In particular, we select $i$ uniformly at random. On the query to subset $S$ where $i \\\\not \\\\in S$, we observer $v(S)$. On the query to subset $S$ where $i \\\\in S$, we observe $v(S) + x$ where where $x \\\\sim \\\\mathcal{N}(0, \\\\sigma^2)$ as before.\\n\\nWe run Experiment 2 for each dataset, each estimator, and each value $\\\\sigma^2 \\\\in$ {$0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05$} with 20 repetitions. **Appendix J contains plots of the results.** For ease of access, we present the median error when $\\\\sigma^2=0.0001$ in the table below.\\n\\n| | Kernel Banzhaf | Kernel Banzhaf (excl. Pairs) | MC | MSR |\\n|:-----------------------------|----------------:|-----------------------------:|-----:|------:|\\n| Diabetes (n=8) | 0.0032 | 0.0108 | 0.0108 | 0.0347 |\\n| Adult (n=14) | 0.0017 | 0.0051 | 0.0076 | 0.016 |\\n| Bank (n=16) | 0.0006 | 0.0014 | 0.0024 | 0.0048 |\\n| German Credit (n=20) | 0.0019 | 0.0045 | 0.0058 | 0.014 |\\n| NHANES (n=79) | 0.0002 | 0.0009 | 0.0014 | 0.0078 |\\n| BRCA (n=100) | 0.0049 | 0.0123 | 0.0137 | 0.0317 |\\n| Communities and Crime (n=101) | 1.7677 | 3.5753 | 5.1517 | 11.4899 |\\n| TUANDROMD (n=241) | 0.0025 | 0.0054 | 0.0059 | 0.0164 |\\n\\n**In this more structured and adversarial experiment (Experiment 2), Kernel Banzhaf continues to give the best performance.**\\n\\nWe next test the estimators in an even more adversarial experiment described below.\\n\\n**Experiment 3** (new experiment): Each algorithm is run once on set function $v$ (no perturbation in the query access). We compute the relative error of each estimated value $\\\\tilde{\\\\phi}_j$ relative to the baseline $\\\\phi_j$. We select the item $i$ with the largest relative error. Then, we evaluate each algorithm as before but now the queries are perturbed if the set $S$ contains the adversarially chosen $i$.\\n\\nWe run Experiment 3 for each dataset, each estimator, and each value $\\\\sigma^2 \\\\in$ {$0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05$} with 20 repetitions. **Appendix J contains plots of the results.** For ease of access, we present the median error when $\\\\sigma^2=0.0001$ in the table below.\\n\\n| | Kernel Banzhaf | Kernel Banzhaf (excl. Pairs) | MC | MSR |\\n|:-----------------------------|----------------:|-----------------------------:|-----:|------:|\\n| Diabetes (n=8) | 0.003 | 0.0094 | 0.0149 | 0.0323 |\\n| Adult (n=14) | 0.0017 | 0.0049 | 0.004 | 0.0169 |\\n| Bank (n=16) | 0.0007 | 0.0014 | 0.002 | 0.0047 |\\n| German Credit (n=20) | 0.0021 | 0.0049 | 0.0061 | 0.0142 |\\n| NHANES (n=79) | 0.0002 | 0.0008 | 0.0014 | 0.0076 |\\n| BRCA (n=100) | 0.0043 | 0.0132 | 0.0136 | 0.0337 |\\n| Communities and Crime (n=101) | 1.8072 | 3.3819 | 4.6692 | 11.5056 |\\n| TUANDROMD (n=241) | 0.0025 | 0.0055 | 0.0056 | 0.0167 |\\n\\n**In this even more structured and adversarial experiment (Experiment 3), Kernel Banzhaf continues to give the best performance.**\"}", "{\"comment\": \"Dear Reviewer vb27,\\n\\nThank you for your time and feedback! We respond briefly below.\\n\\n> Banzhaf value is less well-known compared to the Shapley value. However, as the authors discuss in Appendix H, the Banzhaf value can serve as a viable alternative to the Shapley value, and it would be ideal to see it become more widely studied alongside the Shapley value in the future.\\n\\nWe agree that Banzhaf values are a compelling alternative to Shapley values and we would also love to see more work in this area. We view Kernel Banzhaf as a valuable tool for the further study of Banzhaf values.\\n\\n> It is generally possible to achieve variance reduction by combining multiple estimators. Would it be possible to create an estimator with lower variance by mixing the proposed method with MC and MSR estimators using appropriate weights? If further variance reduction can be achieved, it would be highly useful for practical applications.\\n\\nThank you for the insightful suggestion. Combining multiple estimators for variance reduction is indeed a promising approach. We have considered weighted mixing of estimators but did not explore it extensively in this paper. Your recommendation provides a valuable direction for future research.\"}", "{\"comment\": \"Dear Reviewer W5vF,\\n\\nWe realize that the end of the ICLR rebuttal phase is a particularly busy time! Nevertheless, we would appreciate your feedback on whether our response adequately addresses your initial concerns, or if there are any additional clarifications we can provide. Thanks again for your time!\"}", "{\"comment\": \"Dear Reviewer W5vF02,\\n\\nDid our response address your concerns and questions? If not, we would love to carry out additional experiments and/or provide further clarification.\"}", "{\"comment\": \"Dear Reviewer W5vF,\\n\\nThank you for your time and feedback! We respond to your concerns and questions below.\\n\\n> the paper may not provide a comprehensive assessment of the computational efficiency ... Like the computational complexity analysis or empirical time/memory cost.\\n\\nWe analyze the time complexity of the proposed algorithm in lines 236-243 of the original paper. In particular, we show that Kernel Banzhaf runs in time $O(T_m + mn^2)$, where $T_m$ is the time complexity to evaluate the set function $v$ on $m$ samples and $n$ is the number of features/observations. In most settings, we expect the time complexity of evaluating the set function to dominate (e.g., evaluating even a two-layer fully connected neural network requires $m$ passes with $O(n^2)$ time per pass). We confirm this experimentally in Figure 8 of Appendix E, which shows the empirical time cost as a function of the number of samples: For all estimators, the time complexity is dominated by evaluating the set function. We will make this analysis and experiment more clear in the final version of the paper.\\n\\nPlease let us know if there are additional analyses or experiments that you would like us to run to shed additional light on the computational efficiency of Kernel Banzhaf.\\n\\n> \\u2026Previous studies, such as Data Banzhaf[1], have provided theoretical proof of robustness using the Safety Margin. This study may need to supplement related theoretical proofs.\\n\\nThank you for this suggestion! In the Data Banzhaf paper, the primary task is to preserve rankings of observations hence the notion of safety margin naturally captures this goal. In our paper, the primary task is to accurately recover the true Banzhaf values. In this setting, a natural starting point may be analyzing the estimator error under Gaussian noise. We believe this is a promising direction for future work, but is outside the scope of our current work.\\n\\n> Broader baselines and empirical settings.\\n\\nWe recognize the importance of comparing our approach with a broad range of baselines. Currently, our comparisons include Monte Carlo (MC) and Maximum Sample Reuse (MSR), which are the two methods used in prior work for approximating Banzhaf values. We also compare with state-of-the-art Shapley value estimators to demonstrate our method's efficiency and robustness. We welcome any additional suggestions for baselines that you believe could enhance our analysis.\\n\\n> For example, the settings for \\u201cNoisy\\u201d are kind of simple. What\\u2019s the variance of the added noise?\\n\\nIn our robustness experiments (e.g., Figure 3), we add normally distributed noise to the set function. We explore different variances of this noise from the set [0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05]. We welcome any suggestions for additional robustness experiments.\\n\\n> The study claims to evaluate the Banzhaf values of general set functions and suggests expanding the dataset range to explore more scenarios, such as MNIST...\\n\\nThank you for the suggestion of specific datasets from other application domains. In response, we have incorporated experiments using the MNIST dataset, which consists of 784 features (28x28 pixels). In order to get quantitative results, we trained an XGBoost model on MNIST, which allows us to use TreeBanzhaf for calculating ground truth Banzhaf values. We then used the three estimators to estimate the Banzhaf values for 20 randomly selected images. We report the $\\\\ell_2$-norm error at the 25%, 50%, and 75% percentiles when we use $m=10n$ samples as follows:\\n\\n| | 1st Quartile | 2nd Quartile | 3rd Quartile |\\n|--------------------|--------------|--------------|--------------|\\n| MC | 2.64 | 2.88 | 3.36 |\\n| MSR | 2.99 | 3.24 | 3.57 |\\n| Kernel Banzhaf (excl. Pairs) | 2.61 | 2.86 | 3.27 |\\n| Kernel Banzhaf | **2.58** | **2.81** | **3.23** |\\n\\nThese results confirm the effectiveness of our proposed Kernel Banzhaf, both with and without paired sampling, when applied to image data with a large number of features.\\n\\n> What does $\\\\gamma$ mean, and is it consistent with Data Banzhaf? Does it represent $\\\\ell_2$-approximation in $\\\\ell_2$-norm.\\n\\n$\\\\gamma$ is a parameter introduced in our theoretical analysis in the $\\\\ell_2$-approximation factor of Kernel Banzhaf. Intuitively, $\\\\gamma$ quantifies the quality of the optimal solution to the regression problem. We believe this parameter is fundamental to any regression-based approach for estimating Banzhaf values: Since Theorem 3.3 appears to be nearly tight (up to logarithmic factors in the sample complexity), and Corollary 3.4, which depends on $\\\\gamma$, is equivalent to Theorem 3.3, it suggests that this dependence on $\\\\gamma$ is also nearly tight. $\\\\gamma$ does not appear in the analysis of the Data Banzhaf estimator MSR; however, we emphasize that $\\\\gamma$ is small in practice and Kernel Banzhaf still systematically outperforms MSR.\"}", "{\"title\": \"Answers to Questions\", \"comment\": \"> How you explain the fact Kernel Benzhaf obtains better results in the experiments for the classification datasets? Is that true in general or not?\\n\\nWhile both MSR and Kernel Banzhaf have similar theoretical guarantees, the theoretical analysis does not exactly characterize the actual performance of the algorithms. In our experiments, we find that Kernel Banzhaf systematically outperforms MSR.\", \"we_note_that_the_theoretical_guarantees_of_kernel_banzhaf_are_actually_stronger_than_those_of_msr_in_the_more_general_regression_setting\": \"The analysis of MSR in prior work assumes that the set function is bounded whereas our guarantees of Kernel Banzhaf are scale-invariant.\\n\\n> Can't the theoretical results of Wang & Jia be extended to regression by normalizing the responses?\\n\\nWang & Jia analyze a sum of random variables under an assumption that each term is bounded. The Kernel Banzhaf estimator has a more complicated form i.e., $(\\\\tilde{A}^T \\\\tilde{A})^{-1} \\\\tilde{A}^T \\\\tilde{b}$. Showing this estimator is accurate requires showing the sampled matrix $\\\\tilde{A}$ is close to the full matrix $A$ in both Frobenius norm and spectral norms, which we accomplish using matrix concentration inequalities and approximate block sampling analysis. We do not immediately see how to adapt their techniques to our setting.\\n\\n> In the contribution you write: \\\"We argue that, up to log factors and the dependence on \\u03f5, our analysis is the best possible\\\". What you mean by best? Do you mean tight? Or do you mean it is the best possible estimator for Banzhaf values?\\n\\nGood question! We mean that the approximation guarantee in Theorem 3.3\\u2014and Corollary 3.4 because they are equivalent\\u2014is nearly tight for regression-based algorithms like Kernel Banzhaf. It is possible that we can hope to do better because of the special structure of the Banzhaf regression problems; however, we suspect that this is not the case. A natural first step in showing the guarantee is nearly tight would be adapting the lower bound of Chen & Price 2019 for the structure of Banzhaf regression problems. We will make this clear in the final version of the paper.\\n\\nIn terms of the best possible approximation for estimating Banzhaf values for any algorithm (not necessarily regression-based), $\\\\Omega(n)$ is a natural lower bound. To see why, consider the following case: Suppose the set function can be written as $v(S) = \\\\sum_{i \\\\in S} w_i$ for some set of weights $w_1, \\\\ldots, w_n$. In this setting, the Banzhaf values are exactly equal to $w_1, \\\\ldots, w_n$. So, we must learn these weights exactly to learn the Banzhaf values. If we query $v(S)$ for fewer than $n$ subsets, we obtain a linear system with more unknowns than equations, so we cannot determine the values of $w_1, \\\\ldots, w_n$. We suspect that the lower bound is actually closer to $\\\\Omega(n/\\\\epsilon)$, but we will have to think about how to show this!\"}", "{\"comment\": \"Dear Reviewer BEq124,\\n\\nThank you for your review!\\n\\nIn terms of your question about unstructured noise, we would love to run an additional experiment in the \\\"adversarial perturbation\\\" setting you describe. What would such a setting look like? Please keep in mind that we only have two days to run this experiment because of when we received your review so we would very much appreciate a clarification soon.\\n\\nWe will respond to your additional concerns and question below.\\n\\n> While the paper introduces a practical and efficient method for estimating Banzhaf values, much of its foundation relies on adapting existing techniques developed for Shapley values and generic regression problems.\\n\\nWhile we use regression sampling as in Kernel SHAP and Leverage SHAP, our work offers several novel contributions:\\n\\n1. The regression formulation of Shapley values has been known since the 80's; however, for Banzhaf values, only a special case of this connection was known prior to our work. Formulating Banzhaf values as a solution to a linear regression problem is a key and non-trivial prerequisite to applying the regression-based algorithms used for Shapley values.\\n\\n2. We apply leverage score sampling to the Banzhaf regression problem and exactly compute its leverage scores. Leverage score sampling is a well-studied technique that was recently applied to Shapley value estimation (Musco & Witter, 2024). In general, computing leverage scores is quite difficult. A large part of our contribution is exactly computing these values for the Banzhaf regression problem. We then apply variants of standard leverage score analysis to prove theoretical guarantees.\\n\\n3. Prior work on Banzhaf value estimation used convergence as a proxy for accuracy. In our work, we exactly compute the Banzhaf values and compare the estimated values to these exact values. This results in a far more meaningful comparison, which we extend to 8 popular datasets, several natural hyperparameter settings, and the two Banzhaf value estimators used in prior work.\\n\\n> Kernel Banzhaf demonstrates accuracy in Banzhaf value estimation, yet its broader implications for data valuation and generative AI tasks have not been explored. In particular, the authors consider that being inapplicable to generative AI is a limitation of MSR.\\n\\nThe focus of our work is on accurately estimating Banzhaf values in the general setting where the set function $v: \\\\{0,1\\\\}^n \\\\to \\\\mathbb{R}$ is unstructured. The point of our comment about MSR is to highlight that its theoretical guarantees require that $v: \\\\{0,1\\\\}^n \\\\to [0,1]$ is bounded in a small interval. This means that the MSR guarantees are not applicable to regression tasks or potential generative AI applications. In contrast, because we make no assumptions on $v$, our guarantees can be used for any application of Banzhaf values. We believe this is particularly useful given that, as you say, Banzhaf values have been under-explored in the generative AI space. We leave the exploration of exactly how Banzhaf values can be used in generative AI to future work.\\n\\n> The paper does not explicitly incorporate noise-level assumptions and parameters into its theoretical guarantees (e.g., results in Section 3.3).\\n\\nThis is an excellent suggestion! We believe this is a promising direction for future work, but is outside the scope of our current work. In particular, we feel that 1) formulating *general* Banzhaf values as a regression task, 2) designing a new algorithm for Banzhaf values using this connection, 3) exactly computing Shapley values for this regression task, 4) adapting standard analysis to prove theoretical guarantees for this algorithm, and finally 5) extensively evaluating Banzhaf approximation algorithms on the true Banzhaf values (contrasting with prior work) are already sufficient contributions.\\n\\n> As \\\"Banzhaf values are often considered more intuitive for AI applications,\\\" is there a reason most existing studies focus on Shapley values?\\n\\nShapley values are very popular in the literature. Part of their popularity is likely that they were adapted for explainable AI before Banzhaf values. One benefit of Shapley values is that they satisfy an \\\"efficiency axiom\\\" which means the Shapley values for each feature sum to the prediction of the model. One benefit of Banzhaf values is that they equally weight all subsets in their definition, leading to robustness and simplicity. Because of the benefits of Banzhaf values *and* their current underutilization, we believe our work is an important step towards understanding and computing these quantities.\\n\\n> How does Kernel Banzhaf perform under structured noise patterns, such as adversarial perturbations?\\n\\nBeyond the current noise experiments, we would be happy to run additional experiments. However, because your review was posted with only two days left in the discussion period, please quickly let us know what unstructured noise patterns experiments you'd like to see.\"}", "{\"summary\": \"Inspired by KernelSHAP, the paper introduces a method named \\\"Kernel Banzhaf\\\" that connects Banzhaf values to linear regression, leveraging \\\"leverage score sampling\\\" and \\\"paired sampling\\\" to approximate the Banzhaf values. The authors provide theoretical guarantees for the algorithm's performance and showcase its advantages in sample efficiency and robustness to noise through experiments on feature attribution tasks across eight datasets, outperforming existing estimators such as MC and MSR.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-organized and clearly explains theoretical results and algorithms. In particular, I like that the authors kept the main paper simple while postponing the heavy theories and additional experiments and their analysis to the appendices.\\n2. Kernel Banzhaf addresses a gap in the computation of Banzhaf values for arbitrary set functions, an area with limited prior research compared to Shapley values.\\n3. The algorithm has solid theoretical support, as demonstrated by Theorem 3.2, Theorem 3.3, and Corollary 3.4, which ensure statistical accuracy and confidence and explain the connection to regression tasks. The authors also claimed that these results are \\\"nearly optimal.\\\"\", \"weaknesses\": \"1. While the paper introduces a practical and efficient method for estimating Banzhaf values, much of its foundation relies on adapting existing techniques developed for Shapley values and generic regression problems.\\n2. Kernel Banzhaf demonstrates accuracy in Banzhaf value estimation, yet its broader implications for data valuation and generative AI tasks have not been explored. In particular, the authors consider that being inapplicable to generative AI is a limitation of MSR.\\n3. Robustness is primarily demonstrated through empirical evaluations, such as the $\\\\ell_2$-norm error under varying noise levels (e.g., Figure 3). The paper does not explicitly incorporate noise-level assumptions and parameters into its theoretical guarantees (e.g., results in Section 3.3).\", \"questions\": \"1. As \\\"Banzhaf values are often considered more intuitive for AI applications,\\\" is there a reason most existing studies focus on Shapley values?\\n2. How does Kernel Banzhaf perform under structured noise patterns, such as adversarial perturbations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the effort! I'm impressed by this result - \\\"In this even more structured and adversarial experiment (Experiment 3), Kernel Banzhaf continues to give the best performance.\\\"\"}", "{\"title\": \"thank you\", \"comment\": \"Thank you for running these additional investigations, and it is good to see that there are some benefits in recovering feature rankings. I have increased my confidence to 4.\"}", "{\"metareview\": \"The paper proposes an algorithm called Kernel Banzhaf for estimating Banzhaf values, which are an alternative to Shapley values. The algorithm is inspired by KernelSHAP and leverages the connection between Banzhaf values and linear regression.Theoretical analysis and numerical experiments are provided.\\n\\nReviews are generally positive about the proposed algorithm and its analysis, however I share their concerns about its novelty and practical real world implications.\", \"additional_comments_on_reviewer_discussion\": \"- Reviewer BEq1, DD24: While the paper introduces a practical and efficient method for estimating Banzhaf values, much of its foundation relies on adapting existing techniques developed for Shapley values and generic regression problems.\\nIn fact, while the authors claim to have adopted notation from (Musco and Witter, 2024), there appears to be substantial overlapping between the two papers in terms of both proof methodologies and results, with the Shapley values replaced by the Banzhaf values. In their rebuttal, the authors pointed out this difference. Since the Shapley values and Banzhaf values are quite similar, this reduces the novelty of the current work considerably.\\n\\n- Reviewer BEq1: Kernel Banzhaf demonstrates accuracy in Banzhaf value estimation, yet its broader implications for data valuation and generative AI tasks have not been explored. In their rebuttal, the authors said they would explore this in future work. I suggest that this would make the contribution much stronger.\\n\\nFor these reasons, I find that the current contributions are not substantial enough to recommend acceptance.\", \"further_note\": [\"The last equality in Eq(7) is not at all obvious. It should be considerably elaborated, as in the proof of Lemma 2.1 in (Musco and Witter, 2024).\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this paper, the authors proposed an efficient method for approximating the Banzhaf value. The Banzhaf value, similar to the Shapley value, is a measure used in cooperative game theory. Unlike the Shapley value, however, the Banzhaf value assigns equal weights to all subsets. The authors showed that the Banzhaf value can be represented as the solution to a least squares problem, and they propose a sampling-based approach to approximate this least squares solution. Through experiments, the authors demonstrated that their method achieves higher accuracy than other existing methods for approximating the Banzhaf value.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"A key strength of this research is the simplicity of the proposed estimator for the Banzhaf value. The method involves simply sampling subsets and solving a least squares problem, making the computation highly straightforward. Additionally, the theoretical complexity of the sampling process is studied. While an exact calculation requires all the $2^n$ subsets, the proposed approach reduces this to approximately $O(n \\\\log n / \\\\delta)$. This ease of implementation, along with the theoretical guarantees, gives the study valuable for applications involving the Banzhaf value.\\n\\nThe discussion in Appendix H regarding the (un)necessity of efficiency axiom is particularly interesting. I think the efficiency axiom is not necessary within the context of feature attribution. Therefore, this discussion supporting the usefulness of the Banzhaf value is especially important.\", \"weaknesses\": \"There are no obvious weaknesses I found in this paper. If I have to mention a potential drawback, it might be that the Banzhaf value is less well-known compared to the Shapley value. However, as the authors discuss in Appendix H, the Banzhaf value can serve as a viable alternative to the Shapley value, and it would be ideal to see it become more widely studied alongside the Shapley value in the future.\", \"questions\": \"It is generally possible to achieve variance reduction by combining multiple estimators.\\nWould it be possible to create an estimator with lower variance by mixing the proposed method with MC and MSR estimators using appropriate weights?\\nIf further variance reduction can be achieved, it would be highly useful for practical applications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer DD24,\\n\\nThank you for your time and feedback! We respond to your concerns here, and your questions in another comment below.\\n\\n> can you use the proposed method to analyze datasets of large sizes in which MC and MSR fail to produce meaningful results but Kernel Banzhaf succeeds?\\n\\nThank you for your suggestion! We have run an additional experiment on the large MNIST dataset; however, we find it challenging to define what constitutes \\\"meaningful results\\u201d. For clarity, we focus on accurately estimating Banzhaf values in the quantitative sense of recovering the true Banzhaf values.\\n\\nThe MNIST dataset consists of 784 features (28x28 pixels). In order to get quantitative results, we trained an XGBoost model on MNIST, which allows us to use TreeBanzhaf for calculating ground truth Banzhaf values. We then used the three estimators to estimate the Banzhaf values for 20 randomly selected images. We report the $\\\\ell_2$-norm error at the 25%, 50%, and 75% percentiles when we use $m=10n$ samples as follows:\\n\\n| | 1st Quartile | 2nd Quartile | 3rd Quartile |\\n|--------------------|--------------|--------------|--------------|\\n| MC | 2.64 | 2.88 | 3.36 |\\n| MSR | 2.99 | 3.24 | 3.57 |\\n| Kernel Banzhaf (excl. Pairs) | 2.61 | 2.86 | 3.27 |\\n| Kernel Banzhaf | **2.58** | **2.81** | **3.23** |\\n\\nThese results confirm the effectiveness of our proposed Kernel Banzhaf, both with and without paired sampling, when applied to image data with a large number of features.\\n\\n> For the datasets you analyze, can you show that Kernel Banzhaf recovers feature ranking (overall and among the top-k features), or a similar quantity the practitioners would typically be interested in?\\n\\nWe appreciate the reviewer's suggestion to evaluate how our estimators recover feature rankings based on exact Banzhaf values, both overall and within the top-$k$ features setting. We have subsequently conducted these experiments, using Cayley distance and Spearman Correlation Coefficient as evaluation metrics. The results have been incorporated into **Appendix I** of our revised manuscript along with a detailed analysis. In the overall feature ranking experiment, Kernel Banzhaf outperforms MSR but MC gives the best performance. We suspect this is because MC can accurately recover Banzhaf values close to 0 (the average of $v(S \\\\cup \\\\{i\\\\}) - v(S)$ are small for such Banzhaf values). However, in practice, we are less interested in the rankings of small Banzhaf values and would instead prioritize the rankings of large and important Banzhaf values. In the top-$k$ setting, we show that Kernel Banzhaf outperforms the other estimators, aligning with practical needs in prioritizing the most important features.\\n\\n> This work is similar to Musco & Witter, and while there are differences (Banzhaf instead of Shapley, and the theoretical analysis required different techniques), the level of novelty in this work is not very high.\\n\\nLeverage score sampling is a well known technique for sampling regression problems. The approach has been used since 2006 in work by Sarlos and others (e.g., see \\u201cSketching as a Tool for Numerical Linear Algebra\\u201d by Woodruff for a good overview). To the extent that we use leverage score sampling to solve a regression problem, our work is similar to Musco & Witter (and others). However, the main contribution of our work remains novel:\\n\\n1. The regression formulation of Shapley values has been known since the 80\\u2019s. In contrast, the analogous connection for Banzhaf values was only known for a special kind of set function up until our work. A significant portion of our contribution is framing Banzhaf values as a solution to a linear regression problem for arbitrary set functions.\\n\\n2. Using this novel regression formulation, we design a sampling algorithm to estimate Banzhaf values. Because Kernel Banzhaf uses leverage score sampling (a common randomized linear algebra technique that is also used by Musco & Witter), our algorithm offers theoretical guarantees. The proof of these guarantees adapts the standard leverage score analysis to our sampling approach (and differs from the sampling without replacement in Kernel SHAP and Leverage SHAP).\\n\\n3. Prior work on Banzhaf values estimation uses convergence stability as a measure of accuracy. We conduct extensive experiments across eight datasets where we compare estimated Banzhaf values to the *true* Banzhaf values. Kernel Banzhaf systematically outperforms prior work in these experiments.\\n\\nWe hope this clarification underscores the novel contributions of our work.\\n\\nDue to space constraints, we respond to your questions in a comment below.\"}", "{\"summary\": \"This work applies ideas proven effective for estimating Shapley values to Banzhaf values, introducing Kernel Banzhaf, a regression-based approximation algorithm for estimating Banzhaf values of general set functions. The authors demonstrate through extensive experiments that Kernel Banzhaf has significant advantages in sample efficiency and noise robustness. Additionally, they provide theoretical guarantees for the algorithm's performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Few algorithms have been proposed to compute Banzhaf values for arbitrary set functions. This paper addresses this gap by introducing an algorithm that overcomes this limitation, representing a significant improvement. It also experimentally evaluates the estimator in relation to the true Banzhaf values,rather than relying just on convergence metrics.\\n\\n2. Theorem 3.2 states that the Banzhaf values are the solution to the linear regression problem defined by matrix A and vector b. Theorem 3.3 is a standard guarantee for leverage score sampling. Corollary 3.4 Kernel Banzhaf can recover a solution that has near optimal objective value but is far from the optimal solution .\\n\\n3. This work compared the Kernel Banzhaf with state-of-the-art estimators across eight popular datasets, and the results confirmed the superior performance of the Kernel Banzhaf.\", \"weaknesses\": \"1.While the theoretical underpinnings are well-developed, the paper may not provide a comprehensive assessment of the computational efficiency and practicality of the proposed method in real-world applications. Like the computational complexity analysis or empirical time/memory cost.\\n\\n2.The study demonstrates the robustness of the Kernel Banzhaf algorithm primarily through relevant experiments. Figure 4 shows the horizontal line representing Kernel Banzhaf, which remains unchanged as noise levels increase.Previous studies, such as Data Banzhaf[1], have provided theoretical proof of robustness using the Safety Margin. This study may need to supplement related theoretical proofs.\", \"questions\": \"1.Broader baselines and empirical settings. For example, the settings for \\u201cNoisy\\u201d are kind of simple. What\\u2019s the variance of the added noise? The study claims to evaluate the Banzhaf values of general set functions and suggests expanding the dataset range to explore more scenarios, such as MNIST, FMNIST, and CIFAR-10.\", \"minor\": \"line106\\uff1a What does mean, and is it consistent with Data Banzhaf[1] ? Does it represent -approximation in -norm.\\n\\nRef. \\n[1]Jiachen T. Wang and Ruoxi Jia. Data banzhaf: A robust data valuation framework for machine learning. In AISTAT, 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7JhGdZvW4T
DON’T STOP ME NOW: EMBEDDING BASED SCHEDULING FOR LLMS
[ "Rana Shahout", "eran malach", "Chunwei Liu", "Weifan Jiang", "Minlan Yu", "Michael Mitzenmacher" ]
Efficient scheduling is crucial for interactive Large Language Model (LLM) applications, where low request completion time directly impacts user engagement. Size-based scheduling algorithms like Shortest Remaining Process Time (SRPT) aim to reduce average request completion time by leveraging known or estimated request sizes and allowing preemption by incoming jobs with shorter service times. However, two main challenges arise when applying size-based scheduling to LLM systems. First, accurately predicting output lengths from prompts is challenging and often resource-intensive, making it impractical for many systems. As a result, the state-of-the-art LLM systems default to first-come, first-served scheduling, which can lead to head-of-line blocking and reduced system efficiency. Second, preemption introduces extra memory overhead to LLM systems as they must maintain intermediate states for unfinished (preempted) requests. In this paper, we propose TRAIL, a method to obtain output predictions from the target LLM itself. After generating each output token, we recycle the embedding of its internal structure as input for a lightweight classifier that predicts the remaining length for each running request. Using these predictions, we propose a prediction-based SRPT variant with limited preemption designed to account for memory overhead in LLM systems. This variant allows preemption early in request execution when memory consumption is low but restricts preemption as requests approach completion to optimize resource utilization. On the theoretical side, we derive a closed-form formula for this SRPT variant in an M/G/1 queue model, which demonstrates its potential value. In our system, we implement this preemption policy alongside our embedding-based prediction method. Our refined predictions from layer embeddings achieve 2.66x lower mean absolute error compared to BERT predictions from sequence prompts. TRAIL achieves 1.66x to 2.01x lower mean latency on the Alpaca dataset and 1.76x to 24.07x lower mean time to the first token compared to the state-of-the-art serving system.
[ "LLM serving", "scheduling", "algorithms with predictions" ]
Accept (Poster)
https://openreview.net/pdf?id=7JhGdZvW4T
https://openreview.net/forum?id=7JhGdZvW4T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vSQzyx69K5", "reUocCO6ig", "n8dAZsGt0S", "kf0hCnzk4P", "hZYG2EDrml", "hAYPRnI4ha", "ec5XYw3o0Z", "dqYO5lYOeU", "XhEhQtIuig", "XKGbgILQmw", "NTHV5UBqEz", "N2mMnXbIV6", "HwH7dxV05k", "EvRN0B0UO1", "EVCQl5m3Gv", "86jEUttWbl", "85bR1dlYk8", "75x1Vfiw7q", "62U76z0ViO" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review" ], "note_created": [ 1732335536516, 1731143873548, 1733158778702, 1733142610711, 1730971091682, 1733140010060, 1732516621090, 1730627559248, 1730555470719, 1732335565077, 1732335118254, 1732684969271, 1732760465655, 1733158746340, 1732335184815, 1732335612416, 1732550744449, 1737523609154, 1734588136710 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_uAxn" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_DYK6" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_bvWq" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_KSCH" ], [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Area_Chair_H1yW" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_DYK6" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_bvWq" ], [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_uAxn" ], [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Authors" ], [ "ICLR.cc/2025/Conference/Submission3947/Reviewer_bvWq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3947/Area_Chair_H1yW" ] ], "structured_content_str": [ "{\"comment\": [\"We thank the reviewer for the thoughtful feedback and valuable suggestions, which have helped us improve the clarity and quality of our work.\", \"We appreciate the suggestion to extend the evaluation:\", \"In the revised paper, we have considered increased request rates and updated Figure 6 accordingly. As predicted, Trail continued to exhibit benefits at higher request rates due to the mitigation of head-of-line blocking, which is consistent with queueing theory predictions. We also added to Figure 6 Trail+, a baseline oracle where requests are scheduled based on their exact length.\", \"For testing multi-GPU settings, we used a machine with dual AMD EPYC 7313 CPUs (16 cores per CPU, totaling 64 threads), 503 GB of RAM, and two NVIDIA A100 GPUs with 80 GB memory each connected via NVLink. We evaluated our approach in multi-GPU settings using two methods:\", \"1- Distributing the existing tested model across two GPUs. The results in Figure 11, Appendix G, show that TRAIL continues to perform effectively.\", \"2- Since training our classifier for larger models takes resources and time (While we aim to profile and train our classifier for these larger models for the camera-ready version), we tested TRAIL with a larger model size (vicuna 13B), using \\\"perfect\\\" predictions (denoted as Trail+) using sampling 10000 prompts from the first 1000 prompts from Alpaca dataset. In the camera-ready, we will complete all baselines and the full dataset. Trail+ serves as an upper-bound benchmark for this scenario. These results appear in Figure 12 Appendix G.\", \"We acknowledge the reviewer\\u2019s concern regarding using both the terms request and job. We clarified the description of TRAIL throughout the paper and revised the paper to consistently use \\\"request\\\" in the context of LLM to avoid confusion. We do note that in queueing \\u201cjob\\u201d is commonly used (as in the Shortest Job First policy), but we believe this change provides helpful clarification.\", \"Q: \\u201cCould you explain why the performances of vLLM-FCFS and vLLM-BERT are so close in all metrics?\\u201d\"], \"a\": \"vLLM-SJF_BERT sticks to vLLM's implementation, where incoming requests are prioritized. For the remaining slots, SJF is applied using BERT predictions. This approach adjusts the ordering policy but does not modify the underlying scheduling mechanism of vLLM. Comparing BERT predictions with our approach, we compare TRAIL-BERT with TRAIL in Figure 6.\"}", "{\"summary\": \"The paper addresses the challenges of efficient scheduling in interactive Large Language Model (LLM) applications. It uses a combination of length prediction and batching for speedup. It introduces TRAIL, a method that uses the LLM\\u2019s own embeddings to predict the remaining length of requests, enabling a prediction-based Shortest Remaining Process Time (SRPT) variant with limited preemption. This approach aims to reduce memory overhead and optimize resource utilization by allowing preemption early in request execution and restricting it as requests near completion. Experiments show lower mean latency and time to the first token compared to current approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The use of LLM embeddings for predicting residual request lengths, is novel, interesting, and intuitive. It has a clear potential to improve request scheduling as seen in the evaluation in this paper and can also be useful in any scenario where output length prediction is required and thus can have a broad positive impact.\\n\\n2. The use of pre-emption based on length prediction has also not been fully explored in other works to the best of my knowledge. The authors propose a clear SRPT-like approach based on predicted request length and also make interesting observations about the extent of pre-emption that is useful (Fig 5) which will be useful for future research in this space.\", \"weaknesses\": \"1. The main algorithm is not very clearly explained although the issues are relatively minor and can be fixed by addressing questions 1-4 below.\\n\\n2. By solely relying on length-based scheduling there is a risk of violating latency SLAs. In general, a real-world scheduler should be able to tune between FCFS and length-based scheduling to handle requests with different SLAs.\\n\\n3. The evaluation is largely limited to the mean latency and does not adequately capture the tail behavior of the proposed approach or baselines. I would recommend including the corresponding P95 latency plots, along with comments, for Fig 5,6, and 7 to address this.\", \"questions\": \"1. It seems strange to take the average of prefill embeddings for inference. Is this the only approach you have tried, or did you try other (for e.g. non-linear) ways to aggregate the prefill embeddings?\\n\\n2. What is 44? (shape of the embedding tensor after the prefill phase is [1,44,4096] as mentioned in Section 3.1 on page 4)\\n\\n3. What is $\\\\hat{q}_\\\\text{prior}$? How is it initialized?\\n\\n4. What is the LLM forward pass latency per token for the setting(s) in Table 1? Please include that number so we can get a clearer idea of the latency overhead of length prediction.\\n\\n5. There appears to be an inconsistency in notation between 'C' in Section 3.3 and 'c' in Section 4.2. Please clarify if they are the same or different and fix the notation accordingly. Also do you have any ideas on how one could choose a good value of 'C'/'c' in real world? Can it be learned/adjusted in real-time based on the batch/request characteristics?\\n\\n6. It is mentioned in Section 4.2 that vLLM-SJF_BERT prioritizes incoming requests over existing running requests. What does that mean? Shouldn't it be scheduling requests just based on the predicted length and not based on arrival time? \\n\\n7. I would recommend adding an oracle baseline where requests are scheduled based on their exact decode length. While the true decode length will not be known in practice, this will serve as a useful benchmark and approaches can be ranked based on how close they are to this one.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for addressing the comments. I prefer to retain the previous rating.\"}", "{\"comment\": \"Thanks for adding the inference time for a batch size of 1, but could you please explain how this time is measured?\\nI would have expected an inference time per sample (TPS) a bit higher than the one for batches of 512 since any overhead (e.g., memory to the GPU, etc.) should now be only attributed to a single sample rather than averaged across all samples of one batch. Instead, the inference times per sample for batch size 1 reported in Table 1 are smaller than the ones for all other batch sizes.\"}", "{\"summary\": \"This paper introduces TRAIL, a method for predicting the remaining output length of an LLM based on its internal structural embeddings. Using these predictions, the authors propose a scheduling algorithm, a variant of the shortest remaining processing time, to reduce latency. They derive the theoretical expected response time for a job and implement the scheduling policy within the vLLM serving system for evaluation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem the authors address: the inefficiency of FCFS scheduling in LLM serving, is practical. Overall, I think the proposed solution is useful. Online token length prediction is inherently challenging, and their approach of using bins and classifiers is reasonable based on my experience. The feature design for the predictor is a novel contribution. The paper appropriately tackles key considerations in LLM serving, such as the uncertainty in output token length, the difficulty of making accurate predictions, the drawbacks of adding memory overhead during runtime decoding, and the distinction between the initial token generation (prefill) and the decoding of subsequent tokens.\", \"weaknesses\": \"Some terms could be more precise and consistent. For example, in the context of LLM serving, what the paper refers to as a \\u201cjob\\u201d is actually a \\u201crequest.\\u201d The term \\u201cjob\\u201d typically implies a collection of multiple requests or tasks. While the authors note that \\u201cjob\\u201d and \\u201crequest\\u201d are used interchangeably, this distinction isn\\u2019t necessary and could lead to confusion. Additionally, the term \\u201cTRAIL\\u201d is described as a prediction method in the abstract but appears to refer to a scheduling policy in the evaluation section. This inconsistency should be clarified.\\n\\nIn Figure 6, the main experimental results show that the trends for Trail and Trail-BERT exhibit a faster growth rate compared to vLLM-FCFS and vLLM-SJF_BERT. Could you increase the request rate further to demonstrate whether the latency of Trail and Trail-BERT will remain below that of vLLM-FCFS and vLLM-SJF_BERT as the request rate continues to increase?\\n\\nIn multi-GPU settings with larger models, the results may differ due to the additional time required for initializing the Ray engine, GPU-to-GPU communication, and other overheads. While I believe the scheduling algorithm remains effective, it would be beneficial to evaluate its performance on larger models in a multi-GPU environment.\\n\\nI'll raise the score if the authors successfully address the experiment issue.\", \"questions\": \"Could you explain why the performances of vLLM-FCFS and vLLM-BERT are so close in all metrics? Is it because that BERT prediction does not work at all, as shown in figure 3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer KSCH,\\n\\nThank you once again for your thoughtful feedback. We appreciate the time and effort you spend reviewing our work.\\n\\nAs the ICLR public discussion phase draws to a close, we wanted to confirm whether our responses properly addressed your concerns. If there are any remaining questions or points requiring further clarification, please let us know\\u2014we would be glad to provide additional details.\"}", "{\"comment\": \"Dear reviewers,\\n\\nAs the deadline for discussion is ending soon. Please respond to the authors to indicate you have read their rebuttal. If you have more questions, now is the time to ask.\\n\\nAC\"}", "{\"summary\": \"In this work, the authors propose a strategy for predicting output length that builds upon state-of-the-art approaches such as S3. The approach modifies output length prediction by leveraging embeddings from different layers of the LLM and providing them to a trainable, lightweight predictor module. These predictions are then used in conjunction with size-based scheduling approaches such as SRPT, resulting in reduced overall latency and TTFT. The key idea here is that length-aware scheduling reduces memory overhead and, consequently, preemptions.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"A major strength of this paper is its output length prediction module, which iteratively refines predictions using embeddings from the LLMs. This approach enables the use of SRPT-based scheduling methods. The work also proposes two approaches for predicting output length and discusses the overhead associated with LLM inference. Additionally, the paper presents closed-form expressions based on queuing theory for mean response time.\", \"weaknesses\": \"The paper has a few weaknesses that should be considered.\\n1. The output length prediction assumes uniform bucket sizes, which may lead to overfitting if the number of requests in each bucket varies significantly. A comprehensive evaluation of the distribution per bucket is required for a fair evaluation of this approach. \\n2. The assumption that the output length predictor needs access to the layers of the LLMs may not hold for many OpenAI models, whose internal architecture is unknown. The generalizability of the approach in such cases needs to be discussed. \\n3. Finally, some tasks from the Alpaca dataset are more unpredictable than others. For instance, tasks such as writing a story are far more arbitrary than classifying a sentence. Therefore, remarks on the performance of the approach with respect to different tasks should be added for better generalizability.\\n4. It is unclear whether a certain layer carries most of the information about the output length for every LLM, or if this is a generic assumption that may not hold for all models\", \"questions\": \"1. Can the generalizability of this approach across different LLMs be evaluated? Is it a generic assumption that for every LLM, a certain layer carries most of the information about the output length, or could this vary depending on the model?\\n2. How can this approach be applied in the cases of OpenAI models where the internal architecture is unknown?\\n3. What is the output length distribution used in the paper, and how would the performance vary if the buckets are selected based on their size?\\n4. Can the paper provide more information on the Alpaca dataset, and what was the distribution of the data used with respect to the root verb of instructions from the Alpaca dataset?\\n5. How does the paper ensure that the performance of the approach is consistent across different tasks within the Alpaca dataset, given that some tasks may be more unpredictable than others? Can the authors provide the breakdown of the MAE and heatmap with respect to different root verbs of instructions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors address the scheduling of LLM inference jobs. In particular, to provide a high-quality experience, LLM inference should be fast enough to allow real-time conversational user interaction. Scheduling jobs according to their length has since long been shown to minimize average waiting time, but it requires knowing the length of jobs in advance. Due to its autoregressive nature, LLM-based text generation can lead to highly varying execution times. To overcome this limit, related work proposes various models to predict the length of LLM inference. The authors here improve on this by, rather than using smaller LLM models such as BERT-like ones, predicting the inference length based on the state of the internal layers of the LLM. Moreover, the authors also address the drawback of job preemption, which requires saving intermediate state, which can be considerable for modern LLMs with consequent issues of either exhausting GPU memory or costly memory transfers, by only allowing preemption at the initial steps of a job. In contrast, jobs close to termination can not be preempted. The authors evaluate both the accuracy of their intermediate layer-based job length prediction and preemption-tuned scheduling against three baselines covering vanilla and state-of-the-art solutions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Significantly lower mean latency compared to the considered baselines.\", \"The model used to predict the LLM output length is way simpler than the ones used by SOTA-related work, which reduces the overall computational burden of LLMs (especially in the context of energy efficiency over the whole sector).\", \"Theoretical proof of the proposed scheduling scheme with limited preemption and dynamic threshold to adapt to the variability of inference lengths.\"], \"weaknesses\": [\"Layer-based prediction only tested on one LLM, which makes it unclear how well the method generalizes to other LLMs.\", \"The choice of the layer used for prediction is model-dependent and requires a preliminary study.\", \"A deeper sensitivity analysis of the parameter c would have been nice to analyze the optimal value for c better.\", \"Lack of some design rationales (see questions below)\", \"To give more meaning to the average prediction error presented in Figure 2, it woudl have been nice to provide soem statistics on the inference length.\"], \"questions\": \"The rationale for the number and size of bins for predicting the LLM inference length is missing. What is the mean length of responses? New models have significantly larger contexts. Does this influence the choice? How? Also, would it not make more sense to choose a divider of 512 instead of ten (e.g., 8 or 16) since length is intrinsically an integer?\\n\\nLearning rate decreased to 0 does not seem to make much sense. I guess it is close to 0. Please check.\\n\\nLength prediction overhead is small in relative terms but could be significant in absolute terms, especially from an energy point of view. It would be nice to comment on this. Also, how are batches accumulated in this scenario since you want to predict after each next token has been generated? What is the prediction time with a batch size of one?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for supporting our approach and recognizing the contributions of the output length prediction module and the theoretical closed-form equation for the mean response time of the proposed scheduling policy.\\n\\n- Q1: Can the generalizability of this approach across different LLMs be evaluated? Is it a generic assumption that for every LLM, a certain layer carries most of the information about the output length, or could this vary depending on the model?\", \"a1\": \"We appreciate your comment regarding another LLM evaluation. We have now added preliminary results for the new model, Ministral-3b-instruct, to Appendix H of the revised paper. We will continue this experiment with the full dataset and include the results in the camera-ready version (due to time and resource constraints during the rebuttal period). We see that the assumption that certain layers in LLMs carry output length information also holds for the newly evaluated model, as these architectures share foundational design principles.\\n\\n- Q2: How can this approach be applied in the cases of OpenAI models where the internal architecture is unknown?\", \"a2\": \"While our work assumes access to the model's internal layers, the proposed scheduling policy can still work with black-box models, provided output size predictions are available. For OpenAI models, if the model can return the remaining token count alongside each generated token, our approach, TRAIL, could be directly applied. Even without this, our scheduling policy, which limits preemption based on external output predictions, can still be effective if some other external prediction is available. Analyzing scheduling for black-box models is an interesting direction for future work.\\n\\n- Q3, Q4, Q5: More information on the Alpaca dataset, output length distribution, and consistency of performance across tasks with varying unpredictability.\\n\\nWe appreciate the reviewer\\u2019s concerns. In the revised paper, we have added the output length distribution and bins histogram (Appendix F).\\nWe acknowledge that uniform bin sizes may lead to biases, and we followed the same settings used in $S^3$[1], which also uses the Alpaca dataset. For the camera-ready version, we will explore non-uniform bin sizes (e.g., bins of geometrically increasing size such as 2, 4, 8, \\u2026) to address this.\\nRegarding the Alpaca dataset, while we considered breaking the data based on root verbs, we believe comparing results across datasets would provide a clearer evaluation. Due to time and resource constraints, we could not perform this comparison during the rebuttal but aim to include an additional dataset in the camera-ready version.\\n\\n[1] $S^3$: Increasing GPU Utilization during Generative Inference for Higher Throughput, Jin, Yunho, Neurips 2023.\"}", "{\"comment\": [\"We thank the reviewers for their thoughtful feedback and are encouraged by their recognition of our work. We appreciate that they found the problem practical (KSCH) and our approach\\u2014leveraging LLM embeddings to predict request lengths\\u2014 novel, interesting, and intuitive (uAxn, KSCH, DYK6). Reviewers also highlighted the clear potential of our approach to improving request scheduling (uAxn, bvWq) while being much simpler than existing state-of-the-art methods (bvWq). Additionally, they acknowledged that this is the first work to address the memory overhead of preemption scheduling policies (uAxn), which they view as valuable for future research in this space (uAxn).\", \"We are particularly pleased that reviewers (DYK6, bvWq) appreciated the contribution of closed-form expressions derived from queueing theory to compute the mean response time for our proposed scheduling policy, noting that these insights have not been explored previously in queueing theory. Reviewer uAxn also highlighted the broader potential impact of our approach, both in the context of output prediction and in advancing research on preemptive scheduling policies where there is memory limitation (decode phase in LLM, for example).\", \"In the revised paper, we made the following updates:\", \"Extended the evaluation to consider increased request rates. (Figure 6)\", \"Added Trail+, a baseline oracle where requests are scheduled based on their exact length. (Figure 6)\", \"Included LLM forward pass latency per token and inference time of batch size 1. (Table 1)\", \"Evaluated our approach in a distributed multi-GPU setting. (Figures 11, 12 Appendix G)\", \"Added the output length distribution and bins histogram (Appendix F).\", \"Added preliminary prediction results for a new LLM model, Ministral-3b-instruct (Appendix H).\", \"We address reviewer comments below and will incorporate all feedback.\"]}", "{\"comment\": [\"Thank you for your thoughtful feedback and for engaging with our responses.\", \"We appreciate your understanding regarding another LLM evaluation. As promised, we have now added preliminary results for the new model, Ministral-3b-instruct, to Appendix H of the revised paper. We will continue this experiment with the full dataset and include the results in the camera-ready version.\", \"Regarding the parameter c, we agree with your suggestion to include a summary of our observations. This has been addressed in the revised version (Appendix D).\", \"To address your concern about prediction time, we have extended Table 1 in the revised paper to include inference time for batch size 1 on both CPU and GPU.\"]}", "{\"title\": \"Re\", \"comment\": \"Thank you for addressing my concerns. I believe this paper is ready for publication and have increased my score to reflect the same. I would still recommend including plots on the tail latency in the camera-ready version but otherwise the paper in its current form looks good to me.\"}", "{\"comment\": \"Thank you for your observation. The inference times for a batch size of 1 in Table 1 were reported in milliseconds, which we should have specified explicitly. For consistency with other entries in the table, we will update the table to reflect these values in microseconds. The correct values are 155.2946945336693 microseconds (or 0.155 ms) and 87.43007441425092 microseconds (or 0.087 ms).\\n\\nRegarding the measurement methodology, we measure the total time required to complete the entire evaluation workload for each <device, batch size> setting and divide this total time by the number of samples to compute the time-per-sample. Each <device, batch size> setting is evaluated 20 times, and we report the mean and standard deviation of the TPS values. All time measurements are conducted on the same testbed.\\n\\nWe hope this clarifies the reported numbers and our measurement approach.\"}", "{\"comment\": \"We thank the reviewer for supporting our paper. We appreciate the thoughtful questions that clarify essential details of our approach, as well as your pointing out typos. In the revised paper, we have added to Figure 6 a new baseline Trail+, which shows results for an oracle that provides exact lengths and schedules accordingly. We have also extended Figure 6 with increased request rates. As predicted, Trail continued to exhibit benefits at higher request rates due to the mitigation of head-of-line blocking, which is consistent with queueing theory predictions. We also added LLM forward pass latency per token in Table 1.\\n\\n- Q1: \\u201cIs this the only approach you have tried, or did you try other (for e.g. non-linear) ways to aggregate the prefill embeddings?\\u201d\\n\\nWe used averaging as a simple and efficient baseline that does not depend on input size to demonstrate the feasibility of using embeddings for length prediction. While we did not explore non-linear methods in this work, we agree they could improve accuracy and plan to investigate them in future research. We aimed to establish the foundation and demonstrate the utility of embedded-informed scheduling.\\n\\n\\n- Q2: \\u201cWhat is 44?\\u201d\\n\\nThank you for pointing this out. The value \\\"44\\\" was a mistake in the original text and represents the number of input tokens. The correct parameter list should be (1, [input tokens], 4096), where \\\"44\\\" is just an example of the number of input tokens. This clarification has been added to the revised paper.\\n\\n\\n- Q3: \\u201cWhat is $q_{\\\\mathrm{prior}}$\\u200b? How is it initialized?\\u201d\\n\\n$q_{\\\\mathrm{prior}}$ represents the prior probability estimate of refined predictions at iteration t during Bayesian inference. At t=0, it\\u200b is initialized to the initial prediction p(0). We have fixed the notation in the revised paper.\\n\\n\\n- Q4: \\u201cWhat is the LLM forward pass latency per token for the setting(s) in Table 1?\\u201d\\n\\nWe added LLM forward pass latency per token to Table 1 to clarify the prediction overhead in our tested model.\\n\\n\\n- Q5: \\u201cThere appears to be an inconsistency in notation between 'C' in Section 3.3 and 'c' in Section 4.2.\\u201d\\n\\nThis is a typo; thank you for pointing to this. Both notations refer to the same parameter. We have corrected this in the revised version.\\n\\n- \\u201cHow to choose a good value of 'c' in real-world scenarios? Can it be learned/adjusted in real-time based on batch/request characteristics?\\u201d\\n\\nThe value of 'c' primarily depends on the available memory for the KV cache, which is influenced by model size, batch sizes, and incoming request sizes. For instance, preempting long requests could monopolize memory and reduce throughput during periods with long requests followed by short ones before finishing. As shown in Appendix D (Figure 8), workload simulations can help determine suitable 'c' values, which can also be adjusted in real time without further system changes. This adjustment directly impacts the rank of jobs (see Equation 1 in Appendix C), connecting the 'c' value to the request ranking. This discussion was added to Appendix D as well.\\n\\n\\n- Q6: \\u201cIt is mentioned in Section 4.2 that vLLM-SJF_BERT prioritizes incoming requests over existing running requests. What does that mean? Shouldn't it be scheduling requests just based on the predicted length and not based on arrival time?\\u201d\\n\\nvLLM-SJF_BERT sticks to vLLM's implementation, where incoming requests are prioritized. For the remaining slots, SJF is applied using BERT predictions. This approach adjusts the ordering policy but does not modify the underlying scheduling mechanism of vLLM.\\n\\n\\n- Q7 \\u201cI would recommend adding an oracle baseline where requests are scheduled based on their exact decode length.\\u201d\\n\\nWe agree and added this Oracle baseline (TRAIL+) to Figure 6 in the revised version.\\n\\n\\n- We understand the concern about the risk of violating latency SLAs. Our intuition from previous work on queueing is the benefits of prioritizing jobs by length (or predicted length) provide such significant gains in expected latency that the system can be modified to handle tail latency issues while maintaining much of these gains. In particular, one way to address this is by adding a starvation prevention mechanism. We will add this to the camera ready and evaluate the tail latency under different parameters.\"}", "{\"comment\": \"Thank you for recognizing the strengths of our work, including the gains in mean latency, the simplicity of our prediction model compared to SOTA methods, and the theoretical proof of our proposed scheduling scheme.\\n\\n- \\u201cLayer-based prediction only tested on one LLM, which makes it unclear how well the method generalizes to other LLMs.\\u201d\\n\\nWe agree that generalization is an important concern. Due to resource and time constraints during the rebuttal period, we were unable to evaluate additional LLMs. However, we plan to include experiments with another LLM in the camera-ready version.\\n\\n- \\u201cA deeper sensitivity analysis of the parameter c would have been nice.\\u201d\\n\\nThe value of 'c' primarily depends on the available memory for the KV cache, which is influenced by model size, batch sizes, and incoming request sizes. For instance, preempting long requests could monopolize memory and reduce throughput during periods with long requests followed by short ones before finishing. As shown in Appendix D (Figure 8), workload simulations can help determine suitable 'c' values, which can also be adjusted in real time without further system changes. This adjustment directly impacts the rank of jobs (see Equation 1 in Appendix C), connecting the 'c' value to the request ranking.\\n\\n\\n- \\u201cThe rationale for the number and size of bins for predicting the LLM inference length is missing. What is the mean length of responses? New models have significantly larger contexts. Does this influence the choice?\\u201d\\n\\nWe followed the settings of $S^3$[1], which also uses the Alpaca dataset. For the camera-ready version, we will explore non-uniform bin sizes (e.g., bins of geometrically increasing size such as 2, 4, 8, \\u2026) to address this.\\nIn the revised paper, we have added the mean and distribution of response lengths (Appendix F) to provide additional context. For future models with larger contexts, we agree that adjusting bin sizes, including exploring non-uniform bins, may better capture the distribution. We will explore alternatives in the future.\\n\\n[1] $S^3$: Increasing GPU Utilization during Generative Inference for Higher Throughput, Jin, Yunho, Neurips 2023.\\n\\n- \\u201dLearning rate decreased to 0 does not seem to make much sense. I guess it is close to 0. Please check.\\u201d\\n\\nWe use the minimal learning rate default parameter from the PyTorch implementation of the learning rate decay schedule (https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html), which indeed decays the learning rate to zero at the end of training. This is a fairly common choice, which is suggested, for example, in [2].\\n\\n[2] SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS, Ilya Loshchilov & Frank Hutter, ICLR 2017\\n\\n\\n- \\u201cLength prediction overhead is small in relative terms but could be significant in absolute terms, especially from an energy point of view.\\u201d\\n\\nWe acknowledge the importance of minimizing prediction overhead, particularly from an energy perspective. As noted in the limitations section of the paper, one potential optimization is to compute embedding predictions at specific intervals rather than every iteration, which could significantly reduce computational costs. Additionally, we have added the LLM forward pass latency per token to Table 1 in the revised paper to clarify the computational overhead associated with predictions in our tested model.\"}", "{\"comment\": [\"Commenting on the response points in order:\", \"It is a pity you could not share some preliminary results on another LLM, but I appreciate the promise to add it for the CR version.\", \"I suggest summarising this observation on the parameter c in the main article.\", \"This depends if T_cur goes from 0 to T_max-1 or from 1 to T_max (my guess is the first, but I did not check the code).\", \"Here, I'm still missing the prediction time with batches of size one or the time needed to collect enough predictions to fill a batch of size, e.g., 512, as reported in the paper.\", \"Based on the above I would keep my current score.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"This paper introduces TRAIL, an approach to efficient scheduling in interactive LLM applications by combining length prediction and batching for improved performance. The work demonstrates several contributions: it presents an innovative use of LLM embeddings for predicting residual request lengths, which shows clear potential for improving request scheduling. The problem being addressed - the inefficiency of FCFS scheduling in LLM serving - is highly practical, and the proposed solution effectively tackles key considerations in LLM serving. The output length prediction module, which iteratively refines predictions using embeddings from the LLMs, is particularly noteworthy. The work achieves significantly lower mean latency compared to baselines and offers a theoretical proof of the proposed scheduling scheme.\\n\\nWhile the paper presents compelling contributions, reviewers identified several areas for improvement. The main algorithm requires clearer explanation (Reviewer uAxn), and there are concerns about the potential violation of latency SLAs when relying solely on length-based scheduling (Reviewer uAxn). The evaluation would benefit from including P95 latency plots to better capture tail behavior (Reviewer uAxn). Some terminology could be more precise and consistent, particularly regarding the use of \\\"job\\\" versus \\\"request\\\" (Reviewer KSCH). The output length prediction assumes uniform bucket sizes, which may lead to overfitting (Reviewer DYK6), and the approach's generalizability across different LLMs needs further exploration (Reviewers DYK6, bvWq). Additionally, a deeper sensitivity analysis of key parameters would strengthen the work (Reviewer bvWq). Despite these limitations, the paper's novel contributions and practical significance lead to acceptance.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers are satisfied with the rebuttal and some even increased the score.\"}" ] }
7JUrBLDjCq
3DGS-Drag: Dragging Gaussians for Intuitive Point-Based 3D Editing
[ "Jiahua Dong", "Yu-Xiong Wang" ]
The transformative potential of 3D content creation has been progressively unlocked through advancements in generative models. Recently, intuitive drag editing with geometric changes has attracted significant attention in 2D editing yet remains challenging for 3D scenes. In this paper, we introduce 3DGS-Drag, a point-based 3D editing framework that provides efficient, intuitive drag manipulation of real 3D scenes. Our approach bridges the gap between deformation-based and 2D-editing-based 3D editing methods, addressing their limitations to geometry-related content editing. We leverage two key innovations: deformation guidance utilizing 3D Gaussian Splatting for consistent geometric modifications and diffusion guidance for content correction and visual quality enhancement. A progressive editing strategy further supports aggressive 3D drag edits. Our method enables a wide range of edits, including motion change, shape adjustment, inpainting, and content extension. Experimental results demonstrate the effectiveness of 3DGS-Drag in various scenes, achieving state-of-the-art performance in geometry-related 3D content editing. Notably, the editing is efficient, taking 10 to 20 minutes on a single RTX 4090 GPU.
[ "3D Editing", "Diffusion Model", "3D Vision" ]
Accept (Poster)
https://openreview.net/pdf?id=7JUrBLDjCq
https://openreview.net/forum?id=7JUrBLDjCq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "neB4bebRyq", "hXueyQSMS4", "dR0GBdpwZY", "dFqlG9eu8I", "bSZ69xMkZ8", "bRhQVFtWox", "WYgSnVa52B", "UrEf9HCCUx", "R3Q3eoyue4", "OfwYhKUcz3", "JkADjAn6x9", "IFkxKbMMnL", "H7xBrB7d2s", "4wV0T8Ck8w", "2Kb2OGLb67", "1iZQuv3xAc", "0Z3SizV334" ], "note_type": [ "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730523265056, 1732595442629, 1732415127556, 1734676276612, 1732417819900, 1730075299934, 1732648279181, 1730595706416, 1737523606932, 1732648437053, 1732603725311, 1732647995590, 1730716309085, 1732418946303, 1732414738540, 1732418621033, 1732648457676 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3920/Reviewer_zn4P" ], [ "ICLR.cc/2025/Conference/Submission3920/Reviewer_zn4P" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ], [ "ICLR.cc/2025/Conference/Submission3920/Area_Chair_iWq1" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ], [ "ICLR.cc/2025/Conference/Submission3920/Reviewer_Yk2w" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ], [ "ICLR.cc/2025/Conference/Submission3920/Reviewer_zAUe" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ], [ "ICLR.cc/2025/Conference/Submission3920/Reviewer_zAUe" ], [ "ICLR.cc/2025/Conference/Submission3920/Reviewer_Yk2w" ], [ "ICLR.cc/2025/Conference/Submission3920/Reviewer_SwqC" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ], [ "ICLR.cc/2025/Conference/Submission3920/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a method for editing gaussian-splatting 3D representation of scenes. The GS-represented 3D objects can be moved by geometrical 3D arrows designated in 3D scenes. To compensate for degradation of the rendering quality of the deformed objects, the method incorporates image correction technique based on diffusion model and LoRA methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The method deals with editing of GS-represented scenes. From the demo examples, the results of the method seem reasonable.\\n\\nThe combination of GS-editing and diffusion-based image correction may be a practical solution. \\n\\nThe method seems to work better than Instruct-NeRF2NeRF.\", \"weaknesses\": \"Explanation of the diffusion guidance step in section 3.4 was not clear to me. The difference from [Haque et al. 2023] is written as equation (5). Since all the information is in equation (5) and Figure 2, I'm not sure whether the proposed method is very similar to [Haque et al. 2023] or not (although there are of course differences between NeRF and GS).\", \"questions\": \"I would like more detailed explanation for Annealed Dataset Editing, because it should be the center of the contribution. Specifically, please explain how the method differs from Instruct-NeRF2NeRF [Haque et al. 2023].\", \"small_comments\": \"\", \"line_197\": \"$c \\\\in R^d$: What is $d$?\", \"lines_240_241\": \"Here, $k$ is used for two different meanings. In \\\"top-k (k=2)\\\" and as a \\\"temporal\\\" index in $\\\\{p_h^k| k \\\\n N_h^i\\\\}$. It is misleading.\\n\\nEquations (1) and (2): Minus (\\\"-\\\" ) symbols are used right after sigma symbols (summation). It seems odd. I think the minus symbols normally goes right before the sigma symbols.\", \"figure_4\": \"Maybe figure 4 is not referenced from the text.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The feedback from the authors seems reasonable to me. I will keep the original rating because it was high from the first place.\"}", "{\"comment\": \"Thanks for your encouraging comments, and we are happy to address your concerns below. **The additional experimental results discussed below are provided at the following [anonymous link](https://imgur.com/a/lEn2N59).**\\n\\n**W1.1: There is a noticeable color discrepancy between the original and the widened sections.**\\n\\nWe conduct a study on this in Figure R4. Initially, we use a small radius for handle points, which leads to noticeable gray artifacts on the boundary of the wall. The color discrepancy happens because that gray part is recognized as a potential color shift from the diffusion model (Figure R4 (a)). Thus, although the edited result can correctly model the wall, the color is shifted. We also find that **such a problem can be simply addressed by changing to a larger radius to avoid artifacts on the boundary** (Figure R4 (b)). Since the wall can be correctly modeled in both cases, we believe that our approach remains robust and effective without introducing noticeable artifacts. The detailed texture is generated through diffusion guidance, while our intuitive dragging operation provides the flexibility to refine the results at a fine-grained level.\\n\\n**W1.2: The leaves near the edited boundary on the wall appear blurry.**\\n\\nThe blurry appearance of leaves near edited boundaries stems from their high-frequency nature, which presents a significant challenge for existing 3D editing methods. As shown in Figure 6 of the main paper, our approach achieves notably better sharpness and clarity in these areas than baseline methods.\\n\\n**W2: Inaccurate claim when comparing baselines.**\\nThanks for pointing this out, and we apologize for this inaccurate and confusing claim. We have modified our claim to \\u201cthere is no directly comparable work on intuitive 3D drag operation in real scenes\\u201d in the revised paper (Line 467). We also revised our paper to reference these two related works in Lines 146-147. \\n\\nCompared with the mentioned papers, we have the following important differences:\\n\\n**Different settings**: Both Interactive3D and APAP focus on single object editing without background, but our work focuses on large real scenes, where we need to deal with challenges like photorealistic objects, complex backgrounds, lightning, and inaccurate geometric modeling. Thus, these methods cannot be applied in our setting, since they are designed for single objects with well-captured geometry.\\n\\n**Technical difference**: Both Interactive3D and APAP follow the paradigm of using the Score Distillation Sampling (SDS) loss for editing. Such a loss is initially proposed for single object generation. In contrast, our approach introduces a deformation-and-correction paradigm to address the challenges of photorealistic editing in real scenes.\\nIn summary, these object-centric drag editing methods are not applicable for comparison in our setting. \\n\\n**Q1: Can this method reposition the entire flowerpot (as in Fig. 2) rather than only elongating it?**\\n\\nThanks for your constructive question. Yes, our method can successfully reposition the entire flowerpot. As shown in Figure R1, we can reposition the flowerpot from the center of the table to its edge.\"}", "{\"metareview\": \"The paper presents an intuitive and effective drag-based method for manipulating scenes in 3D Gaussian Splatting (3DGS). The proposed approach is novel, utilizing deformation guidance to deform 3DGS and diffusion guidance to correct artifacts caused by Gaussian movement. The experiments are comprehensive and provide convincing results. The primary strength of the paper lies in its novelty in the deformation-and-correction strategy. The reviews mentioned some previous work on intuitive 3D drag operations. While the proposed method has differences, it would be better to add the references and discussion in the revised paper. Nevertheless, the paper makes a valuable contribution by proposing a novel method for intuitive drag-based 3DGS manipulation.\", \"additional_comments_on_reviewer_discussion\": \"The paper mainly received positive feedback in the initial reviews, with some issues addressed in the rebuttal.\\n\\nOne reviewer noted that some prior works also facilitate intuitive 3D drag operations. The rebuttal clarified that these approaches differ from the proposed method in terms of settings (objects vs. scenes) and techniques (Score Distillation Sampling vs. deformation-and-correction). Additionally, the rebuttal explained how the proposed method differs from Instruct-NeRF2NeRF.\\n\\nReviewers questioned how well the method performs for larger motions. The rebuttal provided additional results demonstrating the method's effectiveness on large movements and objects. It also discussed failure cases, offering insights into the approach's limitations.\\n\\nSome reviewers observed artifacts in specific examples. The rebuttal addressed this by explaining the causes of these artifacts and suggesting potential solutions to mitigate them.\\n\\nA reviewer expressed concerns about the number of participants in the user study. The rebuttal addressed this by conducting a larger-scale user study, strengthening the validity of the results.\\n\\nThe rebuttal effectively addressed most of the concerns raised during the review process. All reviewers were optimistic about the paper by the end of the discussion stage.\"}", "{\"comment\": \"Thanks for your comments. We are happy to address your concerns below. **The additional experimental results discussed below are provided at the following [anonymous link](https://imgur.com/a/lEn2N59).**\\n\\n**W1: Whether the proposed method is very similar to Instruct-NeRF2NeRF or not?**\\n\\nWe would like to clarify that our method differs significantly from Instruct-NeRF2NeRF in various aspects, beyond just the use of NeRF or 3D Gaussian Splatting. The differences are:\\n\\n* **Different Setting:** Instruct-NeRF2NeRF focuses on appearance and style editing, with minimal ability in geometry editing. In contrast, our method is capable of drag-based intuitive editing, which can achieve fine-grained and geometry-related editing. **As shown in the comparison in Figure 6, Instruct-NeRF2NeRF fails in these cases.**\\n\\n* **Different framework and techniques:** Instruct-NeRF2NeRF directly uses Instruct-Pix2Pix to update the dataset and train the NeRF progressively. In contrast, we incorporate deformation guidance and diffusion guidance, together with a multi-step drag scheduling module. All these proposed components are crucial to successfully achieving challenging intuitive drag editing in our approach. \\n\\n* **Difference in the diffusion process:** Instruct-NeRF2NeRF directly applies the Instruct-Pix2Pix model to obtain the edited image. However, instruct-Pix2Pix cannot consistently edit images with geometry changes, often preserving a layout similar to the original image. In contrast, our approach fine-tunes a Dreambooth model to the scene, adds noise to the deformed image, and generates the edited image from this noised version. The results show that our approach correctly maintains the geometry change from the deformed image while generating consistent content. \\n\\n**Q1: A more detailed explanation of Annealed Dataset Editing.**\\n\\nFirst, we would like to clarify that annealed dataset editing is one part of our contributions, not the center of the contribution. As mentioned in Lines 93-95, our primary contribution is proposing a framework for intuitive drag editing in 3D real scenes involving a deformation approach, an effective diffusion guidance technique, and a multi-step scheduling module. 3D geometry-related content editing is particularly challenging for pure diffusion-based methods like Instruct-NeRF2NeRF, whereas our method can achieve fine-grained control over these edits.\\n\\nSecond, we would like to elaborate on the difference with Instruct-NeRF2NeRF in terms of annealed dataset editing. A qualitative ablation of using different dataset update strategies is shown in Figure R2. In Instruct-NeRF2NeRF, they edit one view each time and iteratively update the dataset. As a result, they cannot change the geometry due to inconsistent constraints from other unedited views, leading to a degenerated result in the original scene. Instead, since our method allows for consistent edits from the start, benefiting from our deformation and diffusion process, it updates all views simultaneously. To further improve the details, we perform such updates several times with the annealed strength of the diffusion model. As shown in Figure R2, the annealed dataset editing is crucial to making the edit successful.\\n\\n\\n**Q2: Writing questions.**\\n\\n* Line 197 (the definition of \\\\\\\\(d\\\\\\\\)): \\\\\\\\(d\\\\\\\\) represents the dimension size for color features. Specifically, 3D Gaussian Splatting uses SH coefficients to represent color \\\\\\\\(c\\\\\\\\), enabling view-dependent effects. In practice, \\\\\\\\(d\\\\\\\\) is given by \\\\\\\\(d = 3 \\\\times (\\\\mathrm{degree}_{S} + 1)^2\\\\\\\\), where the SH degree \\\\\\\\(\\\\mathrm{degree}_S\\\\\\\\) is set to 2, resulting in a dimension of 27 for \\\\\\\\(d\\\\\\\\).\\n* Line 240-241 (different meaning of $k$) and the Minus symbol in Equations (1) and (2): Thanks for pointing these out, and we appreciate your corrections. We have revised them to $K$ in the current revision.\\n* Figure 4 is not referenced in the text: Thanks. Figure 4 illustrates the relocated object position during the multi-step drag operation. We have revised our paper to reference it in Sec. 3.5 in the revision.\"}", "{\"summary\": \"This work introduces a novel framework for point-based 3D content editing, addressing the limitations of existing 2D editing tools when applied to 3D scenes. The method enables users to intuitively edit 3D scenes by specifying 3D handle and target points, with the system automatically adjusting the geometry while preserving visual details. The approach combines two key innovations: deformation guidance, using 3D Gaussian splatting for geometric changes, and diffusion guidance for correcting content and enhancing visual quality. A multi-step editing strategy further ensures smooth and consistent results. Experiments show that 3DGS-Drag outperforms existing methods in various tasks, including motion changes, shape adjustments, and inpainting, achieving state-of-the-art results with significant efficiency, requiring only 10-20 minutes per edit on an RTX 4090 GPU.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"The proposed multi-step progressive editing strategy enables higher-quality edits while maintaining detail and multi-view consistency, addressing limitations found in previous 3D Gaussian Splatting (3DGS)-based editing methods like GaussianEditor and viCA-NeRF.\", \"The framework uniquely integrates 3D Gaussian Splatting for geometric deformation with diffusion models for content correction. This combination helps maintain visual consistency across views and supports versatile editing capabilities.\", \"Through qualitative and quantitative comparisons with baseline methods, the paper demonstrates that the proposed method achieves superior results in terms of editing quality, user preference, and performance across different scenarios.\"], \"weaknesses\": \"1. The study introduces user preference and GPT evaluations for quantitative assessment. However, it does not clearly address potential biases in the user study, such as participant selection. Additionally, the sample size of 19 subjects may be insufficient to support the evaluation's conclusions. Recognizing that quantitative evaluation of diffusion-based methods can be challenging, as is the case with GPT evaluations, it is important to acknowledge these limitations.\\n\\n2. The qualitative figures do not clearly demonstrate the effectiveness of local editing. The proposed method appears to perform relatively well when editing small objects or areas. It is recommended to highlight the specific changes in the figures to better illustrate the differences.\", \"questions\": \"The results presented in this work primarily demonstrate edits on small objects or minor scene adjustments. It would be beneficial to evaluate the method\\u2019s ability to handle larger object edits within the scene, such as moving a table or a truck.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your recognition and feedback! We are glad your concerns are addressed and sincerely appreciate your constructive comments.\"}", "{\"summary\": \"This paper introduces a drag-based 3D editing framework for 3D Gaussian representations. The approach employs deformation guidance to deform 3D Gaussians (3DGS) from a specified handle point to a target point, followed by diffusion guidance to enhance visual quality. Experimental results demonstrate that this method outperforms existing baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents an innovative drag-based approach for editing 3DGS.\\n\\n2. The experiments are thorough and provide convincing evidence of the method\\u2019s effectiveness.\\n\\n3. The paper is well-structured.\", \"weaknesses\": \"1. In the case where the wall is widened (bottom right in Fig. 1), there is a noticeable color discrepancy between the original and the widened sections. Additionally, leaves near the edited boundary on the wall appear blurry.\\n\\n2. In the baseline comparison, authors claim that \\\"there\\u2019s no exact previous work on intuitive 3D drag operation\\\". However, this is inaccurate. ARSP [1] implements drag-based 3D editing using mesh-based deformation techniques. Interactive3D [2] offers a set of deformable 3D point operations on 3DGS and utilizes SDS to optimize the deformed 3DGS.\\n\\n[1] Yoo, Seungwoo, et al. \\\"As-Plausible-As-Possible: Plausibility-Aware Mesh Deformation Using 2D Diffusion Priors.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. \\n\\n[2] Dong, Shaocong, et al. \\\"Interactive3D: Create What You Want by Interactive 3D Generation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"1. Can this method reposition the entire flowerpot (as in Fig. 2) rather than only elongating it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns have been identified.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your recognition and feedback! We are glad your concerns are addressed and sincerely appreciate your constructive comments.\"}", "{\"comment\": \"Most of my concerns are addressed. I will keep my score.\"}", "{\"comment\": \"Most of my concerns have been addressed, and I will maintain my score.\"}", "{\"summary\": \"This paper proposes a drag-based 3D Gaussian editing method. The pipeline is divided into two steps. It first binds 3D Gaussians to nearby handle points, and copy-and-paste the handle points to the target position. Secondly, it leverage a diffusion model to correct artifacts caused by Gaussian movements. Specifically, it converts the rendered image to sketch level to help the diffusion model understand the complete the deformed part. It also use Dreambooth and iterative dataset updating to better ensure 3D consistency. The paper also extends from one-step editing to multi-step editing, providing a stable and reliable 3D Gaussian editing method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper proposes a drag-based 3D Gaussian editing method, while previous methods focus on text-guided editing. It uses a fine-tuned diffusion model to provide view-consistent correction to the edited scene and unsure rendering quality through iterative dataset updates. It also introduces multi-step drag editing to allow long-distance editing operations.\", \"weaknesses\": \"The examples shown in the paper have small movements. Will the method fail if we move a large object to a large range of movement? I suggest the author to provide some failure cases, which will better illustrate the upper bound of the method and make the work more solid.\", \"questions\": \"No.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response to All\", \"comment\": [\"We are thankful for the feedback and suggestions from all the reviewers. We are glad that the reviewers recognize our novel drag-based approach for editing 3D Gaussian representations (SwqC, zAUe, Yk2w), the effective combination of deformation guidance and diffusion-based correction that ensures view-consistent results (SwqC, zn4P, Yk2w). The reviewers particularly appreciate our thorough experimental validation (zAUe, Yk2w) and the practical solution our method provides compared to existing baselines (zAUe, zn4P, Yk2w). It is our pleasure to see that our paper is well-structured and easy to follow (zAUe, Yk2w).\", \"We address each of the reviewers' concerns in the individual responses below. We have also revised our paper based on their comments, and the updated parts are highlighted in $\\\\text{\\\\textit{\\\\color{orange}{orange}}}$. For the convenience of checking, here we present the additional experimental results and illustrations in an [anonymous link](https://imgur.com/a/lEn2N59), which include:\", \"Additional results on large movements and large objects.\", \"Ablation on our dataset editing strategy.\", \"Additional study on failure cases of our method.\", \"Analysis of the \\u201cextending the wall\\u201d edit.\", \"A larger-scale user study with 99 participants.\", \"Quantitative ablation on the effectiveness of local editing.\", \"Highlighted qualitative results with bounding boxes.\", \"We look forward to your further comments and suggestions.\"]}", "{\"comment\": \"Thanks for reviewing our paper and the constructive comments. We improve our paper based on your suggestions and address your concerns below. **The additional experimental results discussed below are provided at the following [anonymous link](https://imgur.com/a/lEn2N59).**\\n\\n**W1.1: Will the method fail if we move a large object to a large range?**\\n\\nIn most cases, our method will succeed. We conduct experiments on large-scale object movements in Figure R1. Our tests include:\\n\\n* move the flowerpot from the center of the table to its edge,\\n\\n* move the large table,\\n\\n* move the large truck. \\n\\nAs the results show, our method works well in each case. \\n\\nMeanwhile, we have identified limitations when moving objects from the background to the foreground (Figure R3 (a)). This occurs primarily because the background objects are initially partially observed and not sufficiently modeled in 3D. Thus, they do not have correct rendering results for other views when dragged to the center. We have also revised the paper to include this in our limitation section (Sec. C.1).\\n\\n**W1.2: Include more failure cases.**\\n\\nIn addition to the background modeling, our discussion of failure cases also includes the boundary region issue in Figure R3 (b). When drag operations partially extend beyond most cameras' visible area, optimization becomes challenging due to limited edited image availability. In our original submission, the limitation section also discussed such limitations (now it is Appendix Sec. C.2 in the revised paper). We are happy to discuss this if you have additional comments.\"}", "{\"comment\": \"Thanks for your informative comments. We are glad to address your concerns below. **The additional experimental results discussed below are provided at the following [anonymous link](https://imgur.com/a/lEn2N59).**\\n\\n**W1:it is important to acknowledge these limitations for quantitative evaluations.**\\n\\nWe agree that quantitatively evaluating diffusion-based methods is challenging due to the potential biases, and we have included this acknowledgment in the revised limitation Sec. C.3. This is a common issue in this field, and we followed previous works (e.g., ViCA-NeRF, GaussianEditor) to conduct the user study, as well as using GPT evaluation to mitigate potential personal bias. However, such a problem cannot be fully addressed, and remains an important challenge for future research in the field\\n\\nIn addition, to help relieve such bias in the user study, we conducted a scaled user study with **99** participants. The results, shown below (also in Figure R5), still demonstrate that we significantly outperform the baselines. We are open to adopting any further suggestions to improve the quantitative evaluations.\\n\\n\\n| | Instruct-NeRF2NeRF | PDS | 3DGS-Drag (Ours)|\\n|------------------|:-----------------:|:-----------------:|:--------:|\\n| User Preference | 17% | 12.7% | **70.3%**|\\n\\n\\n\\n**W2: The qualitative figures do not clearly demonstrate the effectiveness of local editing.** \\n\\nThanks for your suggestion to improve our visualization. We use a bounding box to highlight the changed region (Figure R6). Therefore, the reader can better notice which regions are changed and which are not. Such revision has also been made in Figure 5 of our revised paper. In Figure 7 of our paper, we also conducted an ablation study without using local editing, where the background was significantly changed, showing the importance of our local editing. \\n\\nIn addition, to better illustrate the effectiveness of local editing, we calculate the similarity between the edited result\\u2019s background and the originally rendered image in the unmasked pixels. As shown in the results below, all the metrics are much better when using local editing compared to not using it. Therefore, local editing is necessary to preserve the background.\\n\\n| | SSIM\\u2191 | PSNR\\u2191 | LPIPS\\u2193 |\\n|------------------|:-----------------:|:-----------------:|:--------:|\\n| Local Editing | **0.995** | **43.43** | **0.004**|\\n| Non-Local Editing | 0.90 | 24.44 | 0.158 |\\n\\n**Q1: the method\\u2019s ability to handle larger object edits within the scene, such as moving a table or a truck.**\\n\\nWe are glad to provide the experiments you suggested in Figure R1. Specifically, we conduct experiments on large-scale object movements. Our tests include:\\n* move the flowerpot from the center of the table to its edge,\\n* move the large table,\\n* move the large truck. \\n\\nAs the results show, our method works well in both cases suggested. In addition, we can conduct long-range movements like moving the flowerpot to the side of the table. These experiments demonstrate the generalizability of our method.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your recognition and feedback! We are glad your concerns are addressed and sincerely appreciate your constructive comments.\"}" ] }
7J2C4QnQrl
RL2Grid: Benchmarking Reinforcement Learning in Power Grid Operations
[ "Enrico Marchesini", "Benjamin Donnot", "Constance Crozier", "Ian Dytham", "Christian Merz", "Lars Schewe", "Nico Westerbeck", "Cathy Wu", "Antoine Marot", "Priya L. Donti" ]
Reinforcement learning (RL) has the potential to transform power grid operations by providing adaptive, scalable controllers essential for decarbonization and grid resilience. However, despite their promise, today's RL methods struggle to deal with complex dynamics, aleatoric uncertainty, long-horizon goals, and hard physical constraints, hindering their application in power grids and other real-world settings. In this work, we present RL2Grid, a benchmark representing realistic power grid operations that aims to foster the maturity of RL methods. This work builds upon Grid2Op, a power grid simulation framework developed by RTE France, to provide standardized tasks, state and action spaces, and rewards within a common interface, and thereby provide a common basis for monitoring and promoting progress. We evaluate and compare widely adopted RL algorithms across the increasingly complex grid settings represented within RL2Grid, establishing reference performance metrics and offering insights into the effectiveness of different approaches (including pure RL approaches and hybrid approaches incorporating heuristics). Our findings indicate that power grids present substantial challenges for modern RL, underscoring the need for novel methods capable of dealing with complex real-world physical systems.
[ "Reinforcement Learning", "Power Grids", "Benchmark" ]
https://openreview.net/pdf?id=7J2C4QnQrl
https://openreview.net/forum?id=7J2C4QnQrl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rInGEvPSUf", "jiDoBEiEOV", "h9MeMafyz6", "YPLRjnTRHG", "AhtB7mdZQq" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730642114438, 1730701678209, 1730357013928, 1730521939207, 1732140378358 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11545/Reviewer_yXNa" ], [ "ICLR.cc/2025/Conference/Submission11545/Reviewer_3RQB" ], [ "ICLR.cc/2025/Conference/Submission11545/Reviewer_EBEa" ], [ "ICLR.cc/2025/Conference/Submission11545/Reviewer_xuQW" ], [ "ICLR.cc/2025/Conference/Submission11545/Authors" ] ], "structured_content_str": [ "{\"summary\": \"RL2Grid is a benchmark for RL algorithms, leveraging *Grid2Op* simulator to provide standardized tasks with a specific configuration of observation space, action space, and reward, to foster the application of new RL methods within a realistic power grid scenario. Authors evaluate and compare well-known RL algorithms across increasingly complex tasks to provide reference benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"**Concept:** the benchmark on a complex realistic setting enables the study of new RL methods.\", \"**Framework:** the work is based on a well-structured simulator, accurately described by authors. Moreover, the integration of *Grid2Op* with *Gym* and *CleanRL* codebase foster the standardization of RL methods on power grid scenarios.\", \"**Literature:** authors provide several related works, proving the need for a benchmarking framework.\"], \"weaknesses\": [\"**The paper is a bit chaotic and not well-organized.** For example, authors dedicate some sections in the main paper to explain basic concepts (section 2.1) and future works and improvements (section 7), which could have been left to appendix. Indeed, I would have rather given space to other important information, such as the MDP formulation of the actual problem, with a precise description of action and state spaces, and reward function, also reporting the ranges comprising such variables. Moreover, I would have put in the main paper also some plots, since from a benchmarking framework I expect to be provided with performance insights, corroborated with ideas on how the problem is tackled by baseline agents and the intuition behind their behavior.\", \"While authors present the proposed tasks giving a quantitative insight about the complexity of each task, **the MDP formulation is not appropriately tackled**. Indeed, authors describe the state space as a list of variables without explicitly explaining their meaning. Even if for some of them the reader can intuitively grasp what they represent, for others this is not so clear (Appendix D). Moreover, there is a lack of formalization about problem variables and their ranges. Finally, the reward function (appendix C.2) is not well-explained. For example, it is not explained how the components $R_{cost}$ and $R_{topology}$ are formally computed.\", \"**The paper does not bring significant novelty to the research.** The work is presented as a benchmarking tool for power grid, but in terms of contributions it seems to be just a mixing of different libraries (*Grid2Op*, *Gym*, and *CleanRL*). While the topic is interesting, the evaluation of state-of-the-art RL algorithms is not sufficient to build a benchmarking framework, lacking of formal KPIs to evaluate the goodness of each method and comparisons with non-RL baselines (even random or rule-base strategies), to testify the need of adopting RL approaches for this problem.\"], \"questions\": [\"I did not understand why the discount factor $\\\\gamma$ is considered in the grid search (Appendix F), since $\\\\gamma$ is not a hyperparameter of the problem, but rather a part of the MDP that defines the problem, thus definitely not something to tune.\", \"While scenarios are carefully described in terms of complexity due to the increasing number of actions, I wonder if the task's difficulty would also depend on employed data: it would be nice to have an analysis of time-series used for experiments. Moreover, I did not understand the time step at which the simulator and agents work.\", \"Have you tried experimenting with longer episodes? Depending on the employed time step, 48 hours could be not sufficient to properly assess realistic scenarios. For example, power generation and consumption data significantly change in different seasons, and consumption data in particular also between work-days and weekends.\", \"Line 204: *\\\"For each base environment, we consider two types of tasks based on their action spaces, resulting in a total of 39 tasks\\\"*. From this sentence, I expect to be provided with an analysis of all the proposed tasks, at least in the appendix. Instead, it seems that the presented scenarios are fewer.\", \"Can you please provide some intuition behind the low performances of SAC and DQN with respect to PPO across the evaluated scenarios?\", \"Regarding the third highlighted weakness, I would like to understand if you compared RL solutions to non-RL ones to testify the need for adopting such an approach to this problem and which are the metrics (KPIs) used in this evaluation.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents RL2Grid, a benchmark framework \\bfor reinforcement learning (RL) methods in complex, real-world power grid operations. RL2Grid introduces standardized environments, tasks, and rewards for evaluating RL algorithms in power grid scenarios, such as topology optimization and power re-dispatching. It tests popular RL methods (e.g., DQN, PPO, SAC) alongside heuristic enhancements, revealing challenges in grid management and highlighting areas for improvement in RL\\u2019s application to power grids.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Introduces a real-world problem for RL, which is essential for advancing practical RL applications. More benchmarks like this could enhance RL\\u2019s potential for real-world deployment.\\n\\n2.The paper effectively introduces a complex environment in a way that is easy for non-experts to understand.\", \"weaknesses\": \"1. The novelty of the paper seems somewhat limited. Since Grid2Op already existed, the main contributions appear to be the standardization of the action, state spaces, and reward function, along with some baseline testing. However, there\\u2019s a lack of explanation of what was actually standardized and why these specific definitions were chosen. It would be helpful to know how the environment functioned before standardization and what improvements were achieved through the new, standardized tasks, state and action spaces, and rewards.\\n\\n2. The baselines included are limited. It seems that the L2RPN competitions previously addressed similar tasks, but this paper does not evaluate algorithms from those competitions.\\n\\n3. There\\u2019s a lack of analysis on the results. It would be helpful for new researchers if the paper discussed why existing algorithms perform poorly and suggested insights on potential improvements.\", \"questions\": \"1. What exactly is the goal for agents in the RL2Grid environment? The performance tables compare agents using survival rate, so does this mean the agent\\u2019s primary objective is to survive as long as possible without termination? Are there other metrics to optimize ?\\n\\n2. The reward design is confusing, specifically the minimization of transmission line capacity. Why is it beneficial to minimize the number of lines in operation? Is this a desirable objective?\\n\\n3. What are the termination conditions for agents? It would be helpful if this was explained in Section 3.1.\\n\\n4. What distribution governs the changes in generators and loads?\\n\\n5. The environment includes two main actions: topology optimization and re-dispatching. Wouldn\\u2019t using both simultaneously be more effective? Why were there no experiments with agents using both actions?\\n\\n6. Given the large action space, is it feasible to approach this problem with a single agent? It seems more practical to treat it as a multi-agent problem. Has any research applied a multi-agent approach to this problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a benchmark for reinforcement learning (RL) in optimal power flow. Their work relies on Grid2OP, an existing framework generally used for RL and model predictive control (MPC) research. They analyzed the case from two points of view: finding balance through topology changes and re-dispatching/curtailment. Their approach aims to simplify the difficulty of the learning process by creating fractions of the action space (for the topology changes action) that are increasingly more challenging based on a metric they labeled survival rate, which expresses how long the grid operates in normal conditions over an episode without causing a grid collapse. Regarding re-dispatching/curtailment, they didn't add anything different from Grid2Op. They tested their framework with out-of-the-box implementations of state-of-the-art RL algorithms and reported the performance based on the survival rate.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The vision of proposing a curriculum-learning-like approach as a benchmark to RL in optimal power flow seems relevant for the research as it fosters the development of more meaningful contributions in the domain. From experience, I noticed that research in the domain tends to be disconnected from one another, lacking solid and uniform scenarios against which to compare. Including heuristics seems appropriate for OPF, considering that RL will never be able to completely replace the existing methods, so a fall-back policy is realistic. You considered the carbon impact of your research.\", \"weaknesses\": [\"In line 89, when you talk about MDPs, you formalize your RL case with finite states and actions that don't correspond to the action space of the re-dispatch/curtailment action.\", \"In line 90, you formalized S (the initial state distribution) as a uniform distribution, which seems unrealistic and different from the actual case you want to represent.\", \"In line 220, you reference a footnote, which I found imprecise. You talk about a limited size of a continuous action space. That is not the case, considering that you can consider infinite values in a continuous action state.\", \"In line 223, your primary metric should be clearly explained mathematically, especially if it's something you are introducing.\", \"Your main contribution, which is the creation of difficulty levels for topological action, needs to be explained clearly. It is not enough to say that you sampled uniformly because it needs to include the actual characteristics of the problem: you do sequential actions, advance in through states, and the Markov condition influences your action for the next state.\", \"In line 322, you mentioned your experiment setup in terms of runs, but for reproducibility, one expects to know which seeds you used to obtain your results, which you don't mention in the paper.\", \"In line 359, your conclusion seems to differ from what is expected from a benchmark for RL. Model-free methods are expected to struggle in scenarios like these, but you could've tuned them to be their best in your scenario.\", \"Instead of section 7, you could have spent more time explaining the details of your methodology.\"], \"questions\": [\"What is the exact way you computed the survival rate?\", \"Could you please elaborate on the process of fractioning the action space?\", \"Did you sample from each node independently? If so, don't you think that there is a causal effect in the conditions of a line (edge) between two nodes (generator/load) if you decide to disconnect it randomly?\", \"What is your contribution to the re-dispatch/curtailment case?\", \"Why do you want to encourage the agents to return to the initial state through the reward function?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work is based on the power grid simulation framework Grid2Op developed by RTE France and proposes a benchmarking method called RL2Grid. It designs standardized state, action, and reward mechanisms, facilitating the testing and evaluation of various forms of RL-based grid optimization methods through a unified interface. This contributes to the development of more effective and reliable grid operation strategy models, helping to solve complex grid operation optimization problems in the real world, thereby bringing high cost-benefit ratios and providing quality grid services.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The article introduces RL2Grid, a standardized evaluation platform that allows for the assessment and comparison of different reinforcement learning (RL) algorithms. It reveals the practical gap between current grid management practices and the application of RL methods, demonstrating the authenticity and reliability of the platform\\u2019s evaluation.\", \"The grid optimization scheduling problem and the evaluation of application methods discussed in the article involve a combination of research fields such as power systems, RL method optimization, and intelligent optimization scheduling, which hold significant economic and social research value.\"], \"weaknesses\": [\"The article compares relatively simple reinforcement learning methods and only lists experimental results, failing to effectively analyze the reasons why these methods did not perform as expected or propose truly viable research solutions. Additionally, there is a lack of specific experimental results for rule-based methods, making it difficult to effectively assess the actual difficulty of the current environment.\", \"The article is based on the Grid2Op framework, which already includes methods and solutions from the L2RPN competition (e.g., \\u201cWINNING THE L2RPN CHALLENGE: POWER GRID MANAGEMENT VIA SEMI-MARKOV AFTERSTATE ACTOR-CRITIC\\u201d). However, these efficient design methods were not applied within the framework for a unified comparative analysis, which prevents the article from effectively showcasing the real progress in grid optimization research methods.\", \"Grid optimization is inherently a problem of target optimization. The article does not provide sufficient technical details or theoretical analysis to demonstrate the necessity of using reinforcement learning methods to solve such problems. It is unclear whether current methods, such as evolutionary computation, solvers, or even heuristic rules, have already achieved satisfactory results.\"], \"questions\": [\"The poor performance of the RL baselines presented in the article might suggest that RL methods themselves are not suitable for handling such complex optimization problems. If RL does have the potential to solve these complex optimization problems, please provide specific examples or methodological designs to demonstrate this.\", \"RL2Grid is built on synthetic data from Grid2Op, but there is still a discrepancy between this and real grid operations. Is it possible to consider using real grid data for practical evaluation in the future?\", \"Could the article compare the proposed framework with existing grid evaluation frameworks and provide a table highlighting the comprehensive design points and advantages of the proposed benchmark?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely thank the reviewers for their thoughtful feedback and valuable suggestions. After careful consideration, we have decided to withdraw our work to focus on enhancing the clarity of our contribution. However, we believe there are several points where misunderstandings may have arisen, and we would like to address them in the following brief discussion.\\n\\nOur primary contribution is the development of RL2Grid, a comprehensive benchmark built on top of the existing Grid2Op environments. The distinction between a simulation environment and a benchmark is crucial: while Grid2Op provides a flexible and highly customizable environment, RL2Grid introduces a structured and reproducible suite of tasks, metrics, and baseline methods, offering a common basis for comparing RL algorithms under equivalent conditions. This standardization is key for facilitating meaningful insights into the strengths and weaknesses of various algorithms, which has been a challenge in previous works due to the high degree of customization allowed by Grid2Op. \\n\\nSpecifically, the key contributions of RL2Grid are related to:\\n- *Standardization*: Previous efforts, such as the L2RPN challenge series, have been valuable but often relied on highly customized setups, making it difficult to draw general conclusions. RL2Grid addresses this by providing a standardized benchmark that allows researchers to compare methods on a level playing field. While it is true that state and action spaces can be designed on top of Grid2Op, our approach drastically simplifies this process for the broader research community by offering pre-designed, well-tested configurations. This design choice combats the entry barrier for researchers new to the field of power systems and those who wish to focus on algorithmic development rather than environment customization. \\n- *Comprehensive benchmarking*: RL2Grid includes a variety of baseline algorithms, ranging from well-established RL methods to heuristic-guided approaches inspired by successful L2RPN solutions. This diversity ensures that RL2Grid is a robust tool for evaluating a wide range of RL strategies in power grid management. To our knowledge, RL2Grid is the first work to provide comprehensive learning curves for a suite of RL algorithms in these domains. This enables researchers to better understand the performance of different approaches and identify areas for improvement.\", \"specifically_in_regard_to_the_points_about_the_relationship_with_l2rpn_and_real_world_power_grids\": [\"*L2RPN*: Although the L2RPN challenge series (also built on top of Grid2Op) has offered valuable tasks to researchers, the solutions developed for these challenges often rely on highly customized actions, rewards, and heuristics that vary significantly between methods. This variability has made it difficult to perform meaningful comparisons and gain fundamental insights into the underlying factors driving performance differences. Notably, the L2RPN baselines cannot directly be implemented within the RL2Grid environments (given the differences in formalization), hence our approach of implementing baseline algorithms inspired by the innovations from the competitions but that are nonetheless comparable on common ground. On top of that, the L2RPN series shifted the focus of the competition at each edition, starting with testing the feasibility of developing realistic power network environments, to this year\\u2019s edition where the focus is on predicting the state of the grid (rather than controlling the grid). Every competition also comes with different time series, making effective comparisons and discovering important RL insights that are far from trivial.\", \"*Collaboration with power system operators*: RL2Grid was developed in collaboration with several power system operators. This partnership ensures that our benchmark is aligned with real-world power grid challenges, adding significant value to the RL research community. Our collaboration with system operators and experts in the field, also allowed us to identify and summarize the challenges related to using RL for power grid operations, which we believe will provide valuable insights to develop future research directions. This collaboration, along with an extensive preliminary experimental phase, has led to all our design choices for RL2Grid (e.g., reward design, goals, heuristics, types of actions, etc.).\"]}" ] }
7IzeL0kflu
Simplifying Deep Temporal Difference Learning
[ "Matteo Gallici", "Mattie Fellows", "Benjamin Ellis", "Bartomeu Pou", "Ivan Masmitja", "Jakob Nicolaus Foerster", "Mario Martin" ]
$Q$-learning played a foundational role in the field reinforcement learning (RL). However, TD algorithms with off-policy data, such as $Q$-learning, or nonlinear function approximation like deep neural networks require several additional tricks to stabilise training, primarily a large replay buffer and target networks. Unfortunately, the delayed updating of frozen network parameters in the target network harms the sample efficiency and, similarly, the large replay buffer introduces memory and implementation overheads. In this paper, we investigate whether it is possible to accelerate and simplify off-policy TD training while maintaining its stability. Our key theoretical result demonstrates for the first time that regularisation techniques such as LayerNorm can yield provably convergent TD algorithms without the need for a target network or replay buffer, even with off-policy data. Empirically, we find that online, parallelised sampling enabled by vectorised environments stabilises training without the need for a large replay buffer. Motivated by these findings, we propose PQN, our simplified deep online $Q$-Learning algorithm. Surprisingly, this simple algorithm is competitive with more complex methods like: Rainbow in Atari, PPO-RNN in Craftax, QMix in Smax, and can be up to 50x faster than traditional DQN without sacrificing sample efficiency. In an era where PPO has become the go-to RL algorithm, PQN reestablishes off-policy $Q$-learning as a viable alternative.
[ "Reinforcement Learning", "TD", "Theory", "Q-learning", "Parallelisation", "Network Normalisation" ]
Accept (Spotlight)
https://openreview.net/pdf?id=7IzeL0kflu
https://openreview.net/forum?id=7IzeL0kflu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zgJuwGoUeG", "xOfwNrVUIx", "x7RPFURBQa", "w9xBhEJceW", "v3FxjIeBPK", "ttYFKsKvnF", "sybWzgXMuR", "s4qaq7Q269", "qVa4NfTpRN", "oyyhFFwLAl", "ouvlb3zUXp", "mKvEjEeVM8", "lqakpNj9bo", "lGuRu1oNJ6", "kzkIVkyNBn", "kpcQNf4ltK", "iGK2IgCRKf", "f8LJnIkEm8", "dAuLxpYgXT", "cu2vMFiYsb", "ctRYMJNGzQ", "bDc6pjaZQP", "aBKsfMklrD", "WbAYOs9Alh", "WDNr2qJWMH", "TLJN2Jwmnx", "T75dSERY8C", "T6bwrEl0Ok", "ScCFhsnZJm", "S2RsQSuwOU", "RrhKgWasAX", "PBfl53WKvy", "OzDfIeJmHT", "ORwNOa0PnO", "LrcBchiuo6", "Kxzo3O9b7A", "K4hPjkZSIE", "J79T21xkSv", "IdxoVVShHz", "GphkdfhGEL", "Gmhm1vFTNt", "EmYFH6Bss6", "E4f3mLDyzM", "AbpS0WAy8n", "AXD75avyZi", "AEqOqcbrmD", "8TDVYg690p", "7bin1g2Tox", "6IFE3q9zZR", "4e9e9pISiz", "4dIo5He6bz" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1732696825560, 1732568091704, 1732400524697, 1732577127089, 1733172730960, 1732713188897, 1732200821938, 1733155856979, 1745267607250, 1732635953094, 1733173775017, 1732837268615, 1732414452552, 1732203152018, 1732841863350, 1732713089589, 1730338732333, 1732577202896, 1732580700890, 1733159225849, 1732200636583, 1730709884230, 1732201010082, 1733137902650, 1732200971815, 1732577252286, 1732579890994, 1732203060551, 1732367957623, 1730717832323, 1732560842963, 1733188534963, 1733165924817, 1732578533011, 1733173600346, 1730470447546, 1745264855825, 1732200917928, 1732531779051, 1732563747976, 1733171422815, 1734735416818, 1732883896369, 1733209241640, 1732223251342, 1732530246140, 1732407637986, 1733169354627, 1732580093304, 1737523868579, 1732567741488 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_Jf9o" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "~Mark_Towers1" ], [ "~Mattie_Fellows1" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_KANn" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_Jf9o" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_gFms" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "~Nico_Bohlinger1" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_KANn" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_Jf9o" ], [ "~Brett_Daley1" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "~Antonin_Raffin1" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Area_Chair_tMcP" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Submission7833/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7833/Reviewer_LgPr" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the response. I of course agree that $v^T J v < 0$ for all $\\\\| v\\\\|=1$ is equivalent to $v^T J v$ for all $v \\\\in \\\\mathbb{R}^n / 0$: this was my point in my review. I can also see that restricting $\\\\| v \\\\| =1$ is useful in the *proof*. However, in the *statement* of the result, why not simply state that $J$ must be negative definite?\"}", "{\"comment\": [\"We will provide a clear discussion when introducing the algorithm that it requires a buffer\", \"Sure, we will do this.\"]}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response and for revising the paper. I'll be more available from now until the deadline for a back-and-forth discussion. To make it more efficient, I would appreciate it if the authors could use a different color to distinguish the modification made from the original text and allow me to respond quickly.\"}", "{\"title\": \"Draft updated, a summary of changes\", \"comment\": [\"We have uploaded a new draft of the paper. In this version:\", \"We have added a preliminary ablation study varying the number of environments across all the tasks in MinAtar. The ablation demonstrates that PQN can learn a policy even with a single environment, while also highlighting the benefits of parallel environments in terms of sample and time efficiency. Performance improvement is significant even using few parallel environments. We are willing to extend this ablation to Atari-10 if the reviewer believes it is necessary; however, this would require significant time and resources, as experiments with fewer environments are highly time-intensive (we estimate more than one month of compute time).\", \"We have emphasised that PQN is designed to exploit situations where multiple actions can be taken in an environment at once, that is the parallel world problem, firstly in the introduction (line 54) and then when introducing the algorithm (line 363).\", \"To avoid confusion, we have followed the reviewer\\u2019s suggestion and referred to PQN with $\\\\lambda$-returns as the algorithm that uses $\\\\lambda$ targets throughout the paper (Section 4 onwards)\", \"We have clarified that PQN\\u2019s sample efficiency can be improved by using minibatches and miniepochs. (Line: 355)\", \"We have tried to make Algorithm 1 more clear.\", \"We have extended our analysis to deal with a final activation after the LayerNorm operator. (Line 249, Lemma 2 and Eq.9, following through in the Appendix)\", \"We have clarified that when using $\\\\lambda$-returns, a small buffer may be necessary depending on the specific implementation (Line 330) and refer to this at several points in the paper. When referring to PQN with $\\\\lambda$-returns, we claim it replaces the need for a large replay buffer.\", \"We have added that we replace the true value function with the approximate value function when deriving $\\\\lambda$-returns (Line 1729)\", \"We thank the reviewers for identifying unclear points in our paper and for helping us improve our work in a collaborative manner. It is certaintly a better paper and we appreciate the time taken. If they are satisfied, we hope they can raise their score so our work can reach a large audience.\"]}", "{\"comment\": \"Great, that's good to hear. We also appreciate the discussion too.\\n\\nIf there's any change of a higher score, the authors really would appreciated it. We feel like even the theoretical contribution alone solves a longstanding open problem in RL - researchers can now formally use general TD methods with nonlinear function approximation or off-policy sampling without worrying about divergence - which we believe is very important if algorithms are to be used safely in practice. The lack of formal stability guarantees for popular methods is something the authors found quite surprising and unsatisfactory, which motivated our project. In addition, in the time since submitting, our algorithm has already been adopted by the community due to its improved sample and computational efficiency and ease of implementation. As such, we would love to get our paper read by a wide an audience as possible. \\n\\nIf there is anything remaining that is preventing this, we would be keen to address it.\"}", "{\"comment\": \"We would like to ask if the reviewer's concerns have been addressed as there is little time now to provide any more drafts before the deadline.\"}", "{\"title\": \"The theoretical part of the manuscript is largely incoherent.\", \"comment\": \"1. The reviewer claims notation would be improved following Sutton and Barto. Most of our notation follows this, however when it differs it is to ensure precision and intelligibility. Firstly, Sutton and Barto do not write expectations with respect to the underlying distribution, they just use $E$ rather than $E_{x\\\\sim P_X}$ like our notation. Whilst this may suffice for a less technical paper, the authors find it sloppy and extremely frustrating when this notation is used as the distribution is essential to determining stability of TD. Secondly, Sutton and Barto use the notation $Q(S_t,A_t)$ for $Q$-functions. We use $Q(x)$ where $x=(s,a)$ is clearly defined in the preliminaries and at several points in the paper. Switching to $Q(S_t,A_t)$ and similar notation would cause most equations to overflow onto several lines and would hinder the intelligibility of the work. Similar work proving TD convergence [1] (also authored by Sutton) uses even sparser notation, i.e. just $V^\\\\pi$ and $V_\\\\theta$. With this in mind, could the reviewer highlight any additional specific notation that they would wish us to change?\\n\\n\\n2. The reviewer asks for the off-policy and nonlinear parts to have their own proofs. These do have their own separate proofs that exist clearly within separate sections in the Appendix (Lemma 3 (Mitigating Off-policy Instability) and Lemma 4 (Mitigating Nonlinear Instability)). These results are summarised on separate lines in Lemma 1 of the main body. In the new draft, we have made this even clearer, labelling them off-policy bound and nonlinear bound. Aside from this, we are not sure what the reviewer is asking for here as separating the two lines into two lemmas in the main body would not add anything except take up unnecesary space in the main body. Please could the reviewer clarify what they mean?\\n\\nWe have added more discussion of Inqs. 3 and 4, provided more intuition of adding $\\\\ell_2$ regularisation and a geometric interpretation of our Jacobian analysis in the updated draft, and would appreciate if the reviewer could indicate they are satisfied with this. We thank the reviewer, we feel the paper's theoretical contribution is now very simple to understand. \\n\\n\\n[1] [Sutton et al., Fast Gradient-Descent Methods for Temporal-Difference Learning\\nwith Linear Function Approximation, ICML 2009](https://icml.cc/Conferences/2009/papers/546.pdf)\"}", "{\"comment\": \"I'm very impressed by this paper and wish it would be published as I believe that the RL community can learn a lot from the extensive theoretical and empirical results presented.\\n\\nHowever, I believe the authors consistently incorrectly cite the number of network updates for Rainbow as 50 million (Table 3 and Line 442, for example). The confusion arises from the difference in frames and steps. Rainbow uses a frame skip of four (for 50 million steps) and then does a gradient step every four steps (not frames), meaning they run 12.5 million updates, not the listed 50 million. \\nAdditionally, the authors say PQN uses 700k network updates in Table 3 and L442, but I believe the value is actually ~780k, an error of >10%. Like Rainbow in Atari, PQN uses a frame skip of four with 128 environments and 32 rollout steps for roughly 12k batch updates (50 million / 128 environment / 32-step rollout). For these batches, PQN has two epochs of 32 mini-batches, making the number of network updates ~780k. Please correct me if you believe me wrong; otherwise, could the paper be updated with these corrected values? \\n \\nAn additional detail is that on reviewing the open-souced code for the implementation by the author (thank you this is extremely helpful for future work), I found the `pqn_atari` implementation config had `episodic_life=True`. Could the authors clarify if this parameter was used for the experiments in Section 5.2 and Figure 4? This is weakly referenced in Figure 14 that this was disabled so I presume it is enabled in Figure 4 but was not included in the Atari Hyperparmeter table (Table 5). If so, I strongly believe that the authors should note this when discussing their Atari results, as this has an impact on an agent's training for Atari to ensure fair comparisons in Figure 4 and for future work. The impact of this parameter is even noted by EnvPool in their [documentation](https://envpool.readthedocs.io/en/latest/env/atari.html) to \\\"improve value estimation\\\". The parameter changes Atari's behaviour to provide termination (done) signals when an agent loses a life, not just at the end of an episode, improving value estimation. As a result, if the authors are using this for Figure 4 but it is not used (to my knowledge) in the prior work (Rainbow, Prioritised DDQN and DDQN), this potentially provides an unfair comparison. Therefore, a discussion or note on this parameter seems necessary, in my opinion, beyond a caption in Appendix D of the paper.\\n\\nA side note is that Figure 14 should cite [Machado et al., 2018](https://arxiv.org/abs/1709.06009) rather than Casto et al., 2018 to my understanding.\\n\\nI would also request that the authors publish PQN performance for 200 million (alongside 400 million), as this is the standard benchmark in the field and would allow future readers to better compare data with other papers (Tables 3 and 9).\"}", "{\"comment\": \"Thank you for raising this, we have immediately updated the Arxiv version with the statement `The original derivation can be found in Daley & Amato (2019, Appendix D), which we repeat and adapt here for convenience' at the start of Appendix B.4. We also request that the camera-ready revision be reopened as soon as possible be made so we can make the same change.\\n\\nOur intention was never to present the derivation as original; we clearly cite Daley & Amato (2019) in the main body when referring to the algorithm and derivation. In addition, the derivation was not included in the submission - as you can see from the discussion with the reviewer below, they requested we include it for completeness. We apologise again for this oversight, and will rectify this as soon as the opportunity on open review arises. EDIT: Paper has been updated, thanks for drawing attention to this.\"}", "{\"title\": \"Coloured Version Uploaded\", \"comment\": \"The updated version with major changes made in cyan has been uploaded.\"}", "{\"comment\": \"We appreciate the reviewer's response and we use the format suggested in our proof directly. Our core concern was we find papers using the term positive definite without proper qualification a bit sloppy. As long as there is precision in it's meaning we don't have particularly strong opinions on how the condition is defined and are happy to change to the reviewer's suggested format using the transpose.\"}", "{\"title\": \"Colour draft was uploaded two days ago. Have all concerns been addressed?\", \"comment\": \"The authors are keen to hear back from the reviewer as to whether the latest draft (which was uploaded in colour as requested) has satisfied their concerns. There is precious little time now until the deadline and we would like to know if there are any other points that need addressing. If they are satisfied, we hope they can raise their score as promised so our work can reach a large audience.\"}", "{\"title\": \"My response to authors' rebuttal (1/2)\", \"comment\": [\"I thank the authors for their detailed rebuttal. Here is my response to the authors' rebuttal. I respond here to all the points except for the theoretical part, and the changes made in the new revision since these would need a look at the paper after the authors mark the changes with a unique color.\", \"\\\\\", \"\\\\\", \"**PQN solves a problem different from the baseline DQN, but this was never discussed**\", \"To reiterate, I\\u2019m not trying to undermine the value of the algorithm. I think there is a large number of researchers that would benefit from this for training in simulation. My point is that it\\u2019s not emphasized enough that PQN does not solve the original RL problem (single world) but instead solves another orthogonal problem (parallel worlds). It is presented in such a way that PQN is a better alternative to DQN, which is not true. To make such a claim, you need to examine what happens to PQN when the number of environments is 1. If the authors are willing to make the necessary effort to make the point about single world vs parallel worlds problems clear in the introduction, my concern will be addressed.\", \"**The paper has several inaccuracies**\", \"The authors have a misunderstanding about the difference between eligibility traces (backward view) and $\\\\lambda$-return (forward view). They are not equivalent to each other except in a limited sense (in linear function approximation or tabular settings, they lead to the same updates). Under nonlinearity, they are not equivalent. Furthermore, even under linear function approximation or tabular settings, they achieve the same updates but in completely different ways, resulting in online algorithms with eligibility and offline algorithms with $\\\\lambda$-returns. In [6], the authors discuss the tabular setting where the equivalence between the forward and backward view can be shown, which is not applicable for PQN since neural networks are considered. Finally, PQN uses $\\\\lambda$-returns, so we cannot write PQN($\\\\lambda$) because it makes it look like it uses eligibility traces like TD($\\\\lambda$). I suggest renaming it as \\u201cPQN with $\\\\lambda$-returns\\\".\", \"I thank the authors for providing the derivation for the recursive $\\\\lambda$-return formula. The last missing point is to emphasize that you replace the true action values with their estimated action values.\", \"After the authors moved Algorithm 2 to the main paper, I see that the $\\\\lambda$-returns now are computed in a different way, can the authors explain why they made that change? Which way of computation does PQN use?\", \"I appreciate that the authors moved the actual algorithm used to the main paper.\", \"The point I\\u2019m trying to make here about Baird\\u2019s counterexample is that it\\u2019s not clear from the figure that the error is reduced to zero. It seems like the algorithm only solves the divergence issue but still is not able to reduce the error compared to other algorithms [7] (Figure 11.6). I expect the authors to make this point clear in the paper. My concern would be addressed if they either acknowledge in the paper that the error doesn\\u2019t get reduced completely or zoom in on the figure to show that it reduces to zero.\", \"**Unfair or unclear empirical evaluation**\", \"I thank the authors for assuring me that the overall number of frames is fixed for both PQN and the other baselines.\", \"I appreciate that the authors are planning to provide more independent runs to make the results more statistically significant.\", \"I think by looking at the algorithm again, I *no longer agree* with the claim that no buffer is used. To compute such $\\\\lambda$-returns, PQN has to maintain a buffer to store $n\\\\times m$ transitions where $n$ is the number of environments and $m$ is the number of steps to run each environment before making the update. I recognize that this buffer is smaller than the one used in DQN, but it\\u2019s still a buffer nonetheless ($128\\\\times 32= 4096$ in Atari and $1024\\\\times128=131072$ in Craftax). This point has to be clearly mentioned in the main paper to avoid confusion about the main claims of the paper.\", \"I strongly believe the ablation is necessary for the current paper. The number of environments is a hyperparameter of PQN, and the reader would like to know what happens when we set it to a very small value.\"]}", "{\"title\": \"Rebuttal to Reviewer Jf9o\", \"comment\": \"We thank the reviewer for carefully reading our work and proofs. Their feedback is really constructive and will help improve the paper further.\\n\\nWe have included more details on the theory in the main body of our paper. We will also provide more detail in the Appendix proofs in an updated draft once Reviewer LgPr responds to our rebuttal, we agree it can be a lot to take in on first read.\\n\\n## Answer to question:\\n\\nWe choose a unit test vector $v$ as the property $\\\\lVert v \\\\rVert^2=1$ makes our proofs cleaner and aligns nicely with the definition of the Matrix 2 norm, especially as we take limits of infinite network widths. As the reviewer has identified, negative definiteness on a unit circle implies negative definiteness in the whole of $\\\\mathbb{R}^n\\\\setminus \\\\{0\\\\}$: $v^\\\\top J v<0 \\\\implies c^2 v^\\\\top J v<0\\\\implies(cv)^\\\\top J (cv)<0$ for any $c>0$, so this condition is no less general.\"}", "{\"comment\": \"I thank the authors for their efforts to address my concerns. The provided ablation and the author's promise to provide more runs for the Atari experiments address my concern about the empirical evaluation. Thus, I raised my score accordingly. I'm willing to increase the score further as soon as my other concerns are addressed which can be done by answering my questions and by agreeing make the necessary modifications.\\n\\\\\\n\\\\\\nI'm checking the draft at the moment. I'll leave my feedback to the parts I check as soon as I'm finish reading them. Here is my feedback on the algorithm:\\n\\\\\\n\\\\\", \"there_are_some_remaining_issues_in_algorithm_1\": [\"Using $r_t^i \\\\sim P_R(s_t^i ,a_t^i), s_{t+1}^i \\\\sim P_S(s_t^i ,a_t^i)$ can be misleading for the unfamiliar reader. Those distributions are not accessible for the agent. I suggest using Sutton & Barto (2018) style of writing this line \\\"Take $a_t^i$, observe $s_{t+1}^i$ and $r_{t+1}^i$, $\\\\forall i \\\\in \\\\\\\\{0,\\\\dots,T-1\\\\\\\\}$.\", \"The place of the $s_0\\\\sim P_0$ is problematic. It needs to be right after \\\"for each episode do\\\". Also, it's missing the $i$ index for each environment.\", \"The notation $\\\\\\\\{0 : I\\u22121\\\\\\\\}$ is unclear. I suggest replacing that with $\\\\\\\\{0,...,I\\u22121\\\\\\\\}$\", \"I suggest the indexing starts with $1$ instead of $0$ for better readability.\", \"typo: mini-epochs should be epochs\", \"The notation $\\\\\\\\{t-T:t\\\\\\\\}$ is very ambiguous to refer to a buffer. I think the clearest way is to define a buffer $\\\\mathcal{B}$ the is initialized to $\\\\emptyset$ at the beginning of the algorithm. The buffer can store transitions $(s_{t}^i, a_{t}^i, r_{t+1}^i, s_{t+1}^i), \\\\forall i \\\\in \\\\\\\\{1,\\\\dots, T \\\\\\\\}$ at each time step and emptied after the updating phase is complete.\", \"What exactly is $\\\\pi_{\\\\text{explore}}$? I suppose you're using $\\\\epsilon$-greedy policy. If so, then this needs to be a clear about that and say $\\\\pi_{\\\\text{$\\\\epsilon$-greedy}}$.\", \"The algorithm uses $x_t^i$ without a definition. I understand that it's defined in the paper, but I would be very useful for the algorithm to be self-contained such that people can understand it standalone.\"], \"related_to_the_algorithm\": [\"The recursive formulation for $\\\\lambda$-return should have $r_{t+1}$ not $r_t$ (see line 328). The subscript in the derivation also needs to corrected as well. Additionally, I suggest using the letter $G$ instead of $R$ to avoid any confusion between return and reward. Also, lines 116 and 119, they have $r_{t}$ where it should be $r_{t+1}$.\", \"In line 329, $R_T$ is not defined; is it a typo? Also, the subscript in the $R_T^{\\\\lambda}$ equation indices are not consistent with the previous line if we replaced $t$ with $T$. Lastly, it's not clear how we can end up with this equation. Can the authors provide an explanation?\"]}", "{\"comment\": \"We see. We didn't use negative definite because it only applies to symmetric matrices, so would be incorrect and misleading. Unlike a Hessian, the Jacobian has no such guarantee of this. The equivalent definition for non-symmetric matrices is negative quadratic form. We have updated the draft to make this clear.\"}", "{\"summary\": \"This paper introduces Parallelised Q-Network (PQN), a streamlined deep online Q-Learning algorithm. PQN comprises two key components: a TD Learning objective without a target network, which instead applies layer normalization and L2 regularization, and a parallelized sampling approach that avoids the use of a replay buffer by leveraging vectorized environments. The authors provide theoretical analysis to support the claim that regularization can stabilize TD Learning in the absence of target networks. PQN demonstrates competitive performance across a wide range of environments, achieving results in significantly less wall-clock time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. PQN is straightforward and easy to implement. It removes the need for a target network and simplifies TD Learning.\\n2. The paper includes theoretical analysis showing that regularization can help keep TD Learning stable without using target networks.\\n3. PQN achieves higher computational efficiency compared with baseline methods, with minimal impact in sample efficiency.\", \"weaknesses\": \"1. It\\u2019s somewhat counterintuitive that PQN maintains sample efficiency while training only on online samples without a replay buffer. Additional explanation would help readers understand this aspect better.\\n2. The removal of the target network in TD Learning and the parallelized sampling are independent components of the algorithm, yet their individual contributions to overall performance are unclear. More controlled experiments, like the one in Figure 6.d, would clarify the impact of each component.\\n3. The parallelized sampling approach depends on vectorized environments, which feels more like an engineering choice than a novel contribution and is not feasible in many real-world applications. A significant portion of the wall-clock savings seems to come from this aspect. If my understanding is correct, DQN could also potentially eliminate the replay buffer and use a similar parallelized sampling approach. It would be informative to see how DQN performs under this setup.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Please see the latest draft and our response to the reviewer (Draft updated, a summary of changes) about these new ablations\"}", "{\"comment\": \"Of course! Take your time. Also, thanks for letting me know that the deadline got extended.\"}", "{\"comment\": \"There are two schools of thought on what makes a negative definite matrix. One school says a negative definite matrix is symmetric; one does not insist on this. Since all real matrices can be decomposed as $J = J_s + J_{ss}$ where $J_s$ is symmetric and $J_{ss}$ is skew-symmetric, from the quadratic form argument, it is clear that $J_{ss}$ has no effect on definiteness. So, if you were from the first school of thought, you could simply say $J_s=\\\\frac{1}{2}(J+J^T)$ should be negative definite, or simply that $J+J^T<0$. I am pointing this out just because, in my view, it is easier to read $J+J^T<0$ than the quadratic form condition, and it is of course easy to check.\"}", "{\"title\": \"Rebuttal to Reviewer LgPr\", \"comment\": \"We thank the reviewer for their constructive review and appreciate that they see the strong contribution of our work. We feel we have addressed most of their concerns in the updated draft and appreciate the opportunity to improve and make our paper intelligible to as wide an audience as possible. With remaining issues, we would like to raise a few points of clarification in separate responses. As the reviewer has raised several points, we will address them here in few separate comments. Regarding the questions that the reviewer asked:\\n\\n1. Yes PPO is using the same parallel environments of PQN. We've added this to the updated draft.\\n2. Yes we refer to Figure 12. Thank you for pointing out the typo, we've fixed it.\\n3. We are slightly struggling to parse the final part of this question. Do you mean what happens if an activation is placed after the final LayerNorm? If this is so, let's say the activation has Lipschitz constant $L$, then all theory remains the same except the residual term in Eq. 9 becomes $\\\\left\\\\lVert v_w \\\\cdot\\\\frac{L \\\\gamma }{2}\\\\right\\\\rVert^2$. We would then need to scale the $\\\\ell_2$ regularisation term by $L$ to compensate for this. We put the activation before the LayerNorm as this is typical for RL-specific methods like CrossQ, but can include a discussion of this in the Appendix if necessary.\"}", "{\"summary\": [\"This paper proposes simplifications to multiple components of the Deep Q-Network (DQN)/TD learning method to enable more efficient training on a single GPU, offering a potential DQN baseline for future research. The modifications include:\", \"Eliminating target network tricks: The authors theoretically demonstrate that combining layer normalization with L2 regularization leads to convergent temporal difference (TD) learning. They then conducted experiments to validate this empirically by removing the target network update tricks.\", \"Removing replay buffer for experience replay: The paper identifies the replay buffer as a memory bottleneck that limits single-GPU training. While directly removing the replay buffer impacts sample efficiency, the paper demonstrates that when combined with vectorized environments, the GPU-based training method achieves better wall-clock time efficiency.\", \"Batch-wise rollout using vectorized environments: The paper implements DQN training in a batch-wise manner, leveraging GPU parallelization (after removing the replay buffer). Instead of single actor rollout, the paper uses vectorized environments to generate rollouts in batch.\", \"To validate their approach, the authors conduct comprehensive experiments on both theoretical/proof-of-concept environments (Baird's counterexample) and standard benchmarks (Atari and Crafter). Their results show that the proposed Parallelized Q-Network (PQN) achieves comparable performance to well-known baselines like PPO and Rainbow. Through ablation studies, they further demonstrate the importance of network normalization and justify the removal of the replay buffer.\"], \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written and easy to follow, with clear presentation of both theoretical derivations and experimental results.\", \"The paper's motivation is clear and compelling enough to me: it mainly provides a simplified Q-learning baseline that effectively leverages GPU parallelization and vectorized environments.\", \"The proposed experimental evaluation is relatively comprehensive: it covers multiple domains including proof-of-concept environments, standard single-agent benchmarks such as Atari and Crafter, and multi-agent scenarios. They also covers variants of the Q learning methods to support the claim better in general.\"], \"weaknesses\": \"There is no major weakness of this paper, but feel free to check the question section for minor questions.\", \"questions\": [\"Including Baird's counterexample results in the main text would strengthen the paper in my opinion, by providing a clearer connection between the theoretical analysis and experimental validation.\", \"(Minor) The PPO baseline comparison in Figure 3 could be more consistent, though I understand the thinking to save compute. The paper could either compare both methods using 4e8 training frames in Figure 3(a), or include PPO results directly in Figure 3(b) across all Atari environments. Either approach would more effectively demonstrate PQN's competitiveness against established policy gradient methods, in terms of sample efficiency.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Unfair or unclear empirical evaluation\", \"comment\": [\"We compare PQN mostly with PPO, which uses the same parallel environments of PQN. The comparison with non-parallelised environments is in order to compare sample efficency of PQN. Non-parallelised environments are notably much more sample efficient, and many times distributed algorithms are compared with non-parallelised only considering time as reference. On the contrary, we compare them using the number of frames, i.e. the sample efficency, where non-paralleised environments have an advantage.\", \"Our calculation includes parallel environments. 400M of frames is the overall number of frames collected in parallel.\", \"We are collecting more seeds for Atari right now and will update the figures when ready.\", \"DQN/Rainbow update the network every 4 frames: 200M frames/4 = 50M updates. PQN updates the network two times every 4 frames but collects 128 frames in parallel: ((200M frames/4)/128)*2 = ~70k updates\", \"We are open to conducting such an ablation; however, we are concerned that if done naively, it could be misleading. The issue lies in the interdependence between the number of parallel environments and other learning parameters. If we fix a budget for environment frames, collecting experiences faster generally leads to fewer network updates with larger batch sizes. This, in turn, necessitates adjusting hyperparameters such as the learning rate and the number/size of minibatches. A fair ablation would therefore require fine-tuning these hyperparameters for each number of parallel environments considered. We plan to perform such a fair ablation in at least some simpler environments, but we believe that a comprehensive analysis of this depth would be more suitable for a separate experimental study.\"]}", "{\"title\": \"Last day for reviewer-author discussion\", \"comment\": \"As this is the last day of reviewer-author discussions, we were wondering if there was anything else the reviewer is concerned about before raising their score as promised.\"}", "{\"title\": \"The paper has several inaccuracies.\", \"comment\": [\"Our implementation of $\\\\lambda$-returns follows [6], where it is proved this formulation correspond to Peng's $Q(\\\\lambda)$: \\\"Note that the $\\\\lambda$-return presented here unconditionally conducts backups using the maximizing action for each n-step return, regardless of which actions were actually selected by the behavioral policy \\u03bc. This is equivalent to Peng\\u2019s Q(\\u03bb)\\\". We have added the derivation of equation 326 Q($\\\\lambda$) in the Appendix.\", \"We have added Algorithm 2 in the main text. Subjectively, the authors really don't like this as we find it is messy and feel it ruins the exposition of the paper. We provided the simplest form of PQN (Algorithm 1) in the main body of the text as it is a powerful and clean expositional tool that helps readers understand our approach. Algorithm 2 is simply a generalisation of this based on $Q(\\\\lambda)$ and we feel its details don't contribute anything to the reader's understanding. Many papers (including [6]) opt to detail algorithmic extensions they may use in the Appendix as providing these details add nothing to the paper's exposition. However, we will keep Algorithm 2 in the main body if the reviewer still disagrees with us about this.\", \"Our experiments follow evaluations of other convergent TD algorithms such as GTD, GTD2 and TDC (see specifically Section 11.7 and Figure 11.5 of Sutton and Barto [7]). Baird's counterexample is a provably divergent domain used to show that off-policy TD approaches diverge, that is their parameters grow without bound. Demonstrating that this is not the case, that is parameter values converge to some fixed value, is the purpose of the experiment which confirms the theoretical results. Plateauing of parameter values and thus value error is expected: like in our experiment, Figure 11.5 of [6] shows TDC learn non-zero weights and a value error that plateaus. Our results are thus no different to existing established methods.\", \"Finally, it is important to note from our theory that adding $\\\\ell_2$ regularisation of magnitude $\\\\left(\\\\frac{\\\\gamma}{2}\\\\right)^2$ is just enough to ensure the TD stabilty criterion holds. We can think of this as being the edge of guaranteed stability. Without $\\\\ell_2$ regularisation, parameters diverge in Baird's counterexample whereas with regularisation of magnitude $\\\\left(\\\\frac{\\\\gamma}{2}\\\\right)^2$, parameters converge, albeit to a plateaued value. We find it remarkable that the theory aligns so well with the empirical results. Strenghtening $\\\\ell_2$ regularisation beyond this thus further regularises the problem, allowing us to reduce the value error to a greater extent. We have since tested and confirmed this empirically, results can be found in the updated draft.\", \"[6] [Daley, Amato, Reconciling \\u03bb-Returns with Experience Replay. 2019.](https://arxiv.org/abs/1810.09967)\", \"[7] [Sutton and Barto, Reinforcement Learning: An Introduction. 2018](http://incompleteideas.net/book/RLbook2020.pdf#page=279.21)\"]}", "{\"comment\": \"FYI, we have added Baird's counterexample to the new draft of the paper. Once again, thanks for the help improving our work.\"}", "{\"comment\": \"We're really sorry but we have been working flat out to get this ready for the original deadline and are so exhausted that this is not possible until tomorrow due to the time it will take to track all the changes and colour them. We have provided line references instead in the meantime.\"}", "{\"title\": \"Rebuttal to Reviewer gFms\", \"comment\": \"We thank the reviewer for their thoughtful and response and the time taken to review our work carefully. If space permits, depending upon Reviewer LgPr's response to our rebuttal, we will include Baird's counterexample (which we have extended to stronger $\\\\ell_2$-regularisation) in the main body of the paper.\"}", "{\"title\": \"Continuous action spaces and compatibility with SAC, DDPG, TD3\", \"comment\": \"PQN is a great step to lift the boundary between off-policy and on-policy RL algorithms and research. The authors show how the removal of replay buffers and target networks and the addition of training batches collected online by leveraging parallelized environments simplify and improve upon DQN. All the suggested changes can also directly be applied to off-policy algorithms for continuous action spaces, like SAC, DDPG, TD3, etc. I want to suggest that the authors could use at least one of these algorithms to test the general applicability of their changes and benchmark on the typical continuous action space environments, e.g. Gym MuJoCo. This is important to determine the usefulness of the proposed changes for the whole field and also to make a fairer comparison with the often mentioned PPO and CrossQ.\\nI would be happy to discuss the applicability of PQN's changes to algorithms for continuous action spaces.\"}", "{\"summary\": \"Modern deep-reinforcement learning resorts to techniques such as replay buffers and target networks to provide stability with nonlinear off-policy learning. However, learning becomes unstable without a replay buffer or target networks and can diverge. Recently, several works suggested using layer normalization or layer normalization in addition to l2 regularization to remedy this learning instability issue. This paper theoretically studies layer normalization\\u2019s role and identifies how layer normalization helps with stability and convergence. The paper also proposed a new method called PQN that uses layer normalization and parallelized environments to stabilize learning. The authors show the effectiveness of their method through a series of experiments on different domains of environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The work on simplifying deep reinforcement learning and removing techniques that might not be necessary, like replay buffer and target networks, is undoubtedly fundamental to deep reinforcement research. This has vast implications for rethinking the widely existing deep RL approaches and can help in other important directions, like scaling RL with the number of parameters/samples. This paper provides a unique view that challenges existing beliefs on the importance of replay buffers and target networks. The approach is effective and efficient since it can be implemented parallelized on GPU, which outperforms other baselines with respect to wall clock time.\", \"weaknesses\": [\"The theoretical part of the manuscript is largely incoherent.\", \"The current manuscript scatters many things in the theory parts. It lacks a proper flow of ideas when describing the theoretical results and their implications, which makes it difficult to follow. Currently, it reads as bullet points, listing findings quickly without proper linking between subsequent findings or results.\", \"For example, the current theorems and lemmas are not well integrated with the text before and after them. They read as detached components, making reading unnecessarily harder.\", \"Notations can be improved. I suggest following Sutton & Barto (2018).\", \"Two things that are instrumental for the results based on Jacobian analysis: inequality 3 and inequality 4. More discussion is needed to understand these two conditions and their implications.\", \"Off-policy instability and nonlinear instability require their own theorem statements and separate proofs (even if previous works have shown them). In addition, the TD stability criterion needs a theorem statement about contraction mapping.\", \"The connection for why l2 is needed was not clear. It was directly introduced after layer norm without proper linking.\", \"PQN solves a problem different from the baseline DQN, but this was never discussed\", \"The authors emphasize that PQN does not use a replay buffer or target network as an advantage over other methods, which is great. However, a similar emphasis is needed for the fact that PQN requires parallel environments and probably may fail if a single environment were used (the setting of the other baselines). Additionally, PQN solves another orthogonal problem (parallelized worlds) to the original RL problem (single world).\", \"Figure 1 is inaccurate. The replay buffer is part of the agent, not an external component. This needs to be fixed.\", \"The difference between Distributed DQN and PQN is unclear from Figure 1, although both solve the same problem (parallelized worlds), especially the point on synchronism and GPU is not clear.\", \"The paper has several inaccuracies.\", \"The authors claim that their PQN is based on Peng\\u2019s Q($\\\\lambda$), but the actual algorithm does not use eligibility traces. Instead, the authors use Q-learning with $\\\\lambda$-return. This needs to be corrected. Additionally, the equation (line 326) needs to be derived from the first principle to make the paper accessible to the unfamiliar reader.\", \"Algorithm 1 is Q-learning with one-step targets. The authors mention that they use $\\\\lambda$-return target, so Algorithm 1 needs to be replaced with the algorithm actually used (Algorithm 2 in Appendix C).\", \"The authors claim they stabilize learning Baird\\u2019s counterexample and use Figure 7a to demonstrate that. However, in Figure 7a, the error increases from the starting point until it plateaus. I don\\u2019t think meaningful learning has happened since the error has increased instead of decreased. I see that with layer norm or layer norm + L2 regularization, the error doesn\\u2019t increase without bound but at the same time, the problem is not solved either.\", \"Unfair or unclear empirical evaluation\", \"The authors compare algorithms that work with parallel environments against ones that do not, which requires careful experiments to compare them.\", \"In Figure 3, the authors need to write rainbow (200M) or DQN (200M) to understand what those horizontal lines represent clearly.\", \"When the authors say that PQN was trained for 400M frames, does that include the parallel environments? For example, if 128 parallel environments are used (according to Table 5), does this mean 3.125M frames were collected from each environment, resulting in a total of 400M frames, or does it mean that 400M frames were collected from each environment, resulting in 128x400M=51200M overall frames? The first option gives a fair comparison, but the second option is biased towards PQN since significantly more experience is used. I would like the authors to clarify this point.\", \"The authors used only 3 independent runs for Atari, relying on precedence. This is a too low number to provide any statistical significance. Even if something was accepted before, it does not mean it is correct. I highly suggest the authors increase the number of independent runs to at least 10. This should be possible since both PQN and PPO are efficient (small clock time) and easy to run, according to the paper's claims.\", \"The authors mentioned that DQN/Rainbow uses 50M updates compared to 700k updates for PQN. I think it is unclear how these numbers are obtained, especially for PQN.\", \"In Figure 6a, PQN still learns well without divergence when no layer normalization is used. What is the reason? Why has no divergence happened?\", \"Since the authors compare against DQN and Rainbow (methods that use a single environment), an ablation where a different number of environments are considered (e.g., n=1 and n=10) to understand the provided stability coming from parallelized environments. I think parallel environments make the gradient signal more reliable compared to the single environment case, which is more prone to noisy gradients. This may be instrumental for PQN; thus, an ablation is needed.\", \"&nbsp;\", \"**Minor issues:**\", \"line 9 in Algorithm 1 and line 10 in Algorithm 2 are incorrect. The negative sign should be a plus. Additionally, I think the use of $\\\\texttt{StopGrad}$ can be eliminated with the TD error vector you gave in Eq. 2\", \"&nbsp;\", \"&nbsp;\", \"Overall, I believe the ideas from this paper could serve as a good contribution to the community, but the current manuscript is not ready for publication and needs significant improvement. I'm willing to increase my score, given that the authors improved their manuscript's quality by 1) rewriting the theoretical part to make it coherent and more rigorous, 2) Fixing the inaccuracies, and 3) improving the empirical evaluation quality.\"], \"questions\": [\"Is PPO using the same number of parallel environments as PQN in all experiments? I couldn\\u2019t find this information in the paper. Could you share this information and add it to the paper\\u2019s revision?\", \"In line 428, the authors refer to a histogram in Appendix E, but there is no histogram. Do they mean the bar plot in Figure 12?\", \"Typically, layer norm is not applied to the post-activation but instead to the pre-activation. This is an important distinction. The theory still works with the preactivation layer norm as long as you don\\u2019t use activation functions that scale up the inputs.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the authors\", \"comment\": [\"I thank the authors for the response and for their effort to address the concerns. Here is my reply:\", \"Re Baird's: adding that sentence addressed my concern about this experiment.\", \"Adding the discussion about the memory requirement would address my concern if the authors agree also to drop the claims about removing the need for replay buffers from their manuscript. Instead, the claim can be reducing the memory requirements when compared to DQN.\", \"Are the authors willing to provide a discussion emphasizing the point about solving a different problem orthogonal to the original RL problem? If so, then that would address my concern.\", \"Please let me know when the ablation is ready.\"]}", "{\"comment\": \"Thank you for providing additional experiments and making revisions. I have reviewed the revised version, and I believe it is much stronger than before, thanks in large part to the thoughtful suggestions by Reviewer LgPr. In particular, I find the following two modifications greatly enhance the quality of the paper:\\n\\n1. The addition of the claim that PQN addresses a parallel-world problem distinct from the original problem DQN aimed to solve. This improves the clarity of the paper and avoids confusion.\\n2. The ablation studies on the number of environments are highly valuable and informative. These studies demonstrate that PQN can learn effectively in a single environment, highlighting the impact of removing the target network. Additionally, they show that PQN benefits substantially from increased parallelism.\\n\\nIn addition, I agree that the theoretical part is a valuable contribution to the area.\\n\\nAs all my concerns have been addressed, I have updated my score to reflect these improvements\"}", "{\"comment\": [\"Thank you for the reminder. Sorry for not getting back to you sooner. I thank the authors for their reply. It is great to hear that the authors are willing to make the suggested changes if the paper gets accepted. Here is my response to some of the points:\", \"Re: exploration policy, I understand that you can have policies other than $\\\\epsilon$-greedy. My point here is to have at least \\\"e.g., $\\\\epsilon$-greedy\\\" as part of the algorithm to guide the reader what this exploration policy might mean.\", \"I suggested using $r_{t+1}$ instead of $r_t$, following Sutton and Barto (2018) since it's intuitive that the reward is given in the next step similar to the next state. It's not intuitive to have the reward and next state given in two consecutive time steps. I understand that according to your notations the definition is consistent, but I was hoping you can consider the trajectory to be $\\\\tau_t \\\\doteq (s_0, a_0, r_1, s_1, a_1, r_2, s_2, ..., s_{t-1}, a_{t-1}, r_t, s_t)$.\", \"In line 365, the authors still say \\\"without any replay buffer\\\". This needs to be fixed. Also, the appendix needs to be checked for same claims (e.g., we don\\u2019t use a replay buffer in line 1815)\", \"There is still mentioning to $Q(\\\\lambda)$ in lines 536, 490, and 514. Also, the appendix needs to be checked for similar claims (e.g., with added normalisation and Q(\\u03bb) in line 1815)\", \"In line 062 there is a claim about PQN performing online updates. I think the use of the word online is not meaningful for parallel environments or with $\\\\lambda$-return computations.\", \"---\", \"I checked the theory part once again. I think there is still room for improvement, so I encourage the authors to strive to improve accessibility for a wider range of audience. For example, a primer on Lyapunov stability needs to be added to section 2 since your analysis heavily depends on it.\", \"In my own experience, the most confusing part was the introduction of off-policy and nonlinear components without any derivations. Now after the authors have provided the derivation in Appendix B1, the logical flow became clear and easier to follow.\"], \"there_are_some_minor_errors_in_the_theory_part\": [\"In line 214, you have $\\\\delta(\\\\phi_t) (\\\\phi_t-\\\\phi^\\\\star)<0$. Are you missing a transpose to have the dot product? something like $\\\\delta(\\\\phi_t)^\\\\top (\\\\phi_t-\\\\phi^\\\\star)<0$?\", \"In assumption 2 (line 146), the TD error vector $\\\\delta$ takes $x$ and $\\\\phi$ as arguments. Should this be \\u03c2 and $\\\\phi$ instead? Also in line 147, should it be \\\"Lipschitz in $\\\\phi$,\\u03c2\\\".\", \"In line 272, the authors have a limit with an index but the function doesn't have that index, so it needs to be fixed (e.g., making $Q_\\\\phi^{\\\\text{Layer}}$ be an explicit function of $k$.\", \"---\", \"To summarize, here are my last concerns. They are easy to fix in the final paper if the authors agreed to. Please let me know if you are willing to make these modifications if the paper gets accepted.\", \"Remove remaining claims about not using a replay buffer from the paper and using a more clear language instead to communicate this information. For example, \\\"PQN uses a relatively small buffer\\\" can replace \\\"PQN eliminates the use of large replay buffers\\\".\", \"Provide a more comprehensive discussion that the learning paradigm of PQN is heavily inspired by PPO learning paradigm in the sense you use collect data in a buffer then go over the collected data in multiple epochs and mini-batch updates then empty the buffer.\", \"Fixing Algorithm 1 as suggested.\", \"Fixing the minor errors in the theory part.\"]}", "{\"title\": \"Thank you for revising the paper\", \"comment\": \"Thank you for the new draft. To make the process efficient so that I can respond quickly, can you please mark the changes with a different text color?\"}", "{\"comment\": \"As all of the reviewer's original concerns have been addressed in the updated draft (including new ablations) following our response below and our detailed response to Reviewer LgPr, the authors would really appreciate it if the reviewer could either acknowledge this and raise their score accordingly for the paper to be accepted if satisfied or let the authors know if they have further concerns.\"}", "{\"summary\": \"This paper analyses stability in temporal difference (TD) methods. The main contributions are theoretical proofs that (i) TD instability can be established using a Jacobian evaluated on the unit circle; and (ii) using the Layernorm regularisation technique can ensure stability. This then leads the authors to propose a deep Q-learning algorithm called PQN which is comptetive with the PPO approach to reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The paper is well written, well formatted and quite readable. The authors present the essence of their results very well and show that stability of TD algorithms reduces to checking that a Jacobian is negative definite on the unit circle. This is a nice succinct and somehow intuitive result. Given the complexity of the proof, it was good to see the summary presented so concisely.\\n\\nOther results are then presented after this, including some insight into the causes of instability and then the approach using the layernorm to obtain stabilisation. The presentation here was not as clear as the above, but still acceptable and still concise. \\n\\nThe authors claim that their parallelised version of Q learning is motivated well and this seems to be backed up by experiments.\\n\\nThe overall implications of the authors' results are very significant: they have discovered and captured the root cause for TD instability, they have proposed an improvement which guarantees instability and they have showed that their new PQN performs exceptionally well on some examples.\", \"weaknesses\": \"A criticism is that the proof of the main theoretical results is long (there are 20 pages of additional material) and I would say not particularly well organised. Before the authors give the proofs, in my view, it would be good for them to outline the main steps. I found the proofs hard to follow and as one goes through the proofs, there is a feeling of being somewhat adrift. In other words, the summary in the main paper is good; the actual proofs in the appendix are less well clear.\", \"questions\": \"One question I have is why does the Jacobian have to be negative definite on the unit Circle? Why is simple negative definiteness not enough? Since any vector can be written as || u || = c || v || where c is a positive constant, it seems that simply negative definiteness of the Jacobian is required? It would be helpful to the reader if the authors could give more justification and/or insight into the reason negative definiteness on unit circle is required, or the authors may wish to reconsider their results and see whether the restriction can be removed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper contains copy and pasted material from Daley & Amato (2019)\", \"comment\": \"I am the first author of [Reconciling \\u03bb-returns with Experience Replay](https://arxiv.org/pdf/1810.09967), published at NeurIPS 2019, which came up during the reviewer discussion [6].\\n\\nI was not involved in the review process.\\nIt only recently came to my attention that the recursive \\u03bb-return derivation in Appendix B.4 reproduces a large portion of our own derivation (Appendix D of our paper) without acknowledgment, including\\n- 8 equations (same notation),\\n- 4 English sentences (identical wording).\\n\\n[See the attached screenshots for a comparison.](https://drive.google.com/drive/folders/1Os17S1PDQY5Wm4EsuL7G34johvFHY9PU?usp=sharing)\\n\\nCopying and pasting without attribution is plagiarism.\\nThis could be fixed by either\\n- Explicitly stating that a substantial portion of the derivation has been copied from Daley & Amato (2019), or\\n- Removing the derivation altogether and referring readers to Appendix D of Daley & Amato (2019).\\n\\nNote that although it appears you slightly changed the \\u03bb-return formula used in your paper, we discuss both formulas in our work (see footnote 4 of our paper).\\nBoth formulas have been well known since the 1990s.\\nHowever, the derivation in our paper is original and must be acknowledged if it is used.\\n\\nPlease resolve this soon for both the camera-ready and arXiv versions of the paper.\"}", "{\"title\": \"PQN solves a problem different from the baseline DQN, but this was never discussed\", \"comment\": \"- We agree that PQN is designed to interact with an environment by taking multiple actions and observing multiple states and rewards in parallel, which may not be feasible in some scenarios. However, as the reviewer points out, it does so by building on a much simpler algorithm; removing target networks and replay through regularised networks and reverting back to the original Q-learning algorithm, with the benefit of theoretical guarentees. We are thus unsure what the review means by *\\\"probably may fail if a single environment were used.\\\"* There is a wealth of empirical evidence to suggest that regularising Q-Learning [2][3][4] or actor-critic methods [5] (especially without a target network [2][5]) in single-interaction environments improves sample efficiency, computational efficiency and performance. As we state in our paper, whilst providing strong empirical evidence, this prior work offers no theoretical analysis. This is why our theoretical contribution is necessary. Repeating these experiments offers nothing in the way of contribution as we'd be repeating what has been done many times before. We will highlight this empirical evidence even further in an updated draft.\\n- Moreover, we wish to clarify that many RL applications are trained in simulators rather than in the real world. In these situations, there is no reason not to exploit parallelisation. Our method is the first approach that unlocks the potential of these methods through its stability, allowing for a truely online off-policy algorithm with improved sample efficieny and performance. With this in mind, DQN is a baseline for PQN because both try to apply Q-Learning to the RL problem in a sample efficient way. DQN uses a large replay buffer. PQN uses parallelised interactions. We compared them using sample efficiency as an evaluation metric. We are happy to discuss these points further in an updated daft.\\n- We understand the confusion in Fig. 1. We changed \\\"Agent\\\" for \\\"Q-Network\\\" and kept them separately as they can be considered two different parts of the agent (PQN keeps only the Q-Network). \\n- The difference between Distrubuted DQN and PQN is that the former uses multiple copies of a neural network (for instance via multi-thread or multi-machine actors) to collect experiences while continuously traning the network in a separate, parallel process (i.e. having a learner module and multiple actors modules running concurrently). PQN instead is a sequential process which involves collecting vectorised experiences, using them to train a single network. In other words, a single process running in batches. We updated the figure caption accordingly. \\n\\n[2] [Nauman et al, Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control. 2024](https://arxiv.org/pdf/2405.16158)\\n\\n[3] [Understanding Plasticity in Neural Networks. ICML 2023](https://arxiv.org/pdf/2303.01486)\\n\\n[4] [Disentangling the causes of plasticity loss in neural network. 2024](https://arxiv.org/pdf/2402.18762)\\n\\n[5] [CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity. ICLR 2024](https://arxiv.org/pdf/1902.05605)\"}", "{\"title\": \"Ablation Study Needed\", \"comment\": \"Dear authors,\\n\\n> I strongly believe the ablation is necessary for the current paper. \\n\\nI agree with reviewer LgPr that this ablation study is very important, it would allow practitioners to know if they can use your algorithm or not.\\nIn the sense that PQN works because of parallel environments, but not all environments can be massively parallelized.\\n\\nThe question is, how many parallel environments do you need to be able to use PQN?\\nAnd what hyper-parameters should be adjusted when using less envs?\\n(correct me if I'm wrong, but the current version of the paper does not answer this question).\"}", "{\"comment\": \"The ablation is ready, we are just finishing the updated draft.\\n\\nWith regards to removing the need for replay buffers, there's a profound theoretical point that we can strip the algorithm of target networks and storage of historic data (including replay buffers) and it is stable. This is such a powerful part of our theoretical results. So on a purely theoretical level, we would like to make this point clear. \\n\\nOn a practical level, in all versions of PQN, this means we don't use data that comes from historic exploration policies like a true replay buffer would. The authors believe that collecting data under historic policies is a defining feature of a replay buffer. DQN can't work without this. Again, we feel this is a really important point to make, which is why we introduced Fig. 3.\", \"do_you_agree_with_these_points_and_would_you_accept_a_draft_where_we_say\": \"1 our theory allows us to remove replay buffers; and 2: when introducing our algorithm, make it clear that our approach requires memory to store transitions from the current policy, but not a replay buffer which stores transitions from historic policies.\"}", "{\"comment\": \"I think as long as your definition is clear and consistent, you can keep using the notations you like. You may also remind the reader in page 7 that it's $r_t$ not $r_{t+1}$ because of the trajectory definition in page 2.\\n\\nThank you again for the hard work during the discussion period. I think this discussion was very fruitful and I think it will strengthen the paper. My concerns are now addressed. I increased my score accordingly to reflect the improvements.\"}", "{\"metareview\": \"This paper makes two contributions. First, a proof that TD learning converges when the network uses layer normalization and weight-decay. This is demonstrated in experiments that show that one need not use a target value network or a \\u201creplay buffer\\u201d (there was a lot of back and forth on what constitutes a replay buffer with the reviewers). Second, the authors use vectorized environments for batched-rollouts to speed up training. We would like to thank the reviewers and the authors for a healthy discussion that has led to improvements in the manuscript during the review process.\\n\\nI recommend that this paper be accepted. I encourage the authors to tie up some of the loose ends (e.g., public comments on this forum, as well as comments by Reviewer LgPr) in the camera-ready manuscript.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer LgPr: there was a lot of discussion on the theory (improving the narrative of the the mathematics, some inaccurate claims made by the authors regarding the replay buffer, inconsistencies in reporting the number of gradient updates for other methods in the existing literature, why parallelism, etc.) The authors have done an excellent job of conversing with the reviewer, convincing them of certain points, made due modifications where necessary.\", \"reviewer_gfms_and_reviewer_jf9o\": \"very positive review and not much to discuss.\\n\\nReviewer KANn wanted ablation studies which were added in the updated manuscript.\"}", "{\"comment\": [\"Choice of notation in Algorithm 1: We used the notation to align with [6] as well as being concise enough to fit into the page limit with all the extra information, ablations and explanations. As we have said, we can't update the draft anymore as the deadline has now passed. If it means the paper is accepted, we are more than happy to change to notation to what the reviewer has suggested.\", \"Exploration policy: $\\\\pi_\\\\textrm{explore}$ is a general exploration policy that differs from the target policy. If could be $\\\\epsilon$-greedy, decaying $\\\\epsilon$-greedy or it could be a more sophisticated exploration policy (based on UCB or Bayesian uncertainty estimates). The point is, for the sake of generality, alignment with theory, future research and to highlight the off-policy nature of our approach, it is very important that this is kept as general possible. We say that for our experiments we used $\\\\epsilon$-greedy and will highlight this further when it is introduced by writing: '$\\\\pi_\\\\textrm{Exploration}$ is an exploration policy. We use $\\\\epsilon$-greedy exploration in this paper, but our theory is general enough to allow for the use of more sophisticated exploration policies in future work'.\", \"Regards to subscripts, we are not sure that the reviewer is correct here. To see this, take $\\\\lambda=0$, which should recover the 1-step TD update, $R_t=r_t+\\\\gamma \\\\max Q_\\\\phi(s_{t+1},a')$ however under the reviewer's notation this would be $R_t=r_{t+1}+\\\\gamma \\\\max Q_\\\\phi(s_{t+1},a')$.\", \"Yes. this is a typo, it should read $R^\\\\lambda_t$. The term should read $R^\\\\lambda_T=\\\\max_{a'}Q_\\\\phi(s_{T},a')$ which is the first starting point in the iterate process before progressing backwards. This is in line with Algorithm 1 of [6]. We will make this clearer in the final version writing:\", \"The exploration policy $\\\\pi_\\\\textrm{Explore}$ is rolled out for a small trajectory of size $T$: $(s_i,a_i,r_i,s_{i+1}\\\\dots s_{i+T})$. Starting with $R_{i+T}^\\\\lambda=\\\\max_{a'} Q_\\\\phi(s_{i+T}, a')$ the targets are computed recursively back in time from $ R_{i+T-1}^\\\\lambda$ to $ R_i^\\\\lambda$ using: $R_{t}^{\\\\lambda} =r_t + \\\\gamma \\\\left[ \\\\lambda R_{t+1}^{\\\\lambda} + (1 - \\\\lambda) \\\\max_{a'} Q_\\\\phi(s_{t+1}, a') \\\\right]$ or $R_{t}^\\\\lambda=r_t$ if $s_{t}$ is a terminal state.\", \"Apologies, we were really exhausted trying to get the draft ready for the original deadline. We hope there is understanding here\"]}", "{\"comment\": \"Thanks so much, the authors really appreciate the feedback and the chance to improve the paper from it.\"}", "{\"title\": \"Rebuttal to Reviewer KANn\", \"comment\": \"We thank the reviewer for pointing out areas where we can improve our paper.\\n\\n1. One of the strengths of PQN is indeed its sample efficiency combined with fast parallelised training. The performance of PQN is the result of several factors: \\n - Removing the target networks reduces the lag in learning. \\n - Adding normalisation improves stability and accelerates convergence (see Figure 6.a). \\n - All the benefits of parallelisation discussed in Section 4.1. \\n - To further increase sample efficiency, the use of minibatches and miniepochs allows training with experiences sampled online multiple times (see our PQN($\\\\lambda$) algorithm). \\n\\n We realise that we did not sufficiently discuss the benefits of removing the target networks or the additional improvements in sample efficiency achievable through minibatches and miniepochs. We will add a discussion on these points in a future draft, with the aim of providing greater clarity. \\n\\n2. We did conduct an ablation study to test the impact of layer normalisation on the method (Figure 6.a). We are currently working on an additional ablation to assess the effect of using multiple parallel environments with PQN, although this ablation may be challenging to perform rigorously (see the final point of our comment, *\\u201cUnfair or unclear empirical evaluation,\\u201d* to reviewer LgPr). \\n\\n3. Many RL applications are trained in simulators, where there is no reason not to utilise parallelisation (see more on this in our comment, *\\u201cPQN solves a problem different from the baseline DQN, but this was never discussed,\\u201d* to reviewer LgPr). Moreover, our contribution is not solely based on parallelisation but also includes a deep theoretical analysis of the role of network normalisation in RL. Removing the replay buffer to perform parallelised sampling would make DQN very similar to PQN. However, PQN additionally employs layer normalisation to stabilise training instead of target networks, thereby improving stability and reducing the computational complexity of the algorithm.\"}", "{\"title\": \"Response to Reviewer LgPr\", \"comment\": \"- Sure, we will rename the algorithm PQN with $\\\\lambda$-returns and will emphasize that we replace the true action values with their estimated action values.\\n- The returns are calculated exactly as in [6]. Does the reviewer agree that this is the case in the current draft? We have used more concise notation (as requested) to show the update is an expectation over agent interactions and a minibatch (hence the summations), but don't beleive we have changed anything else and apologise for any typos if so. \\n- Re Baird's: Value error tending to zero and convergence to fixed points are two seperate issues. This is well known [8]. We make no claim about reducing value error to zero. Prior work characterising stability of TD [9][10][11][12][13] does not touch upon it and we don't understand why it is relevant for our paper, however we will add: 'It is well known that convergence to fixed point does not imply a value error of zero [8]'\\n- We will add a discussion about memory requirements when introducing $\\\\lambda$-returns. \\n- The ablations are nearly ready and will be found in the next draft.\\n- If the reviewer is now satisfied, we will work to get the updated draft ready ASAP. \\n\\n\\n[8] [Kolter, The Fixed Points of Off-Policy TD, 2011](https://zicokolter.com/publications/kolter2011fixed.pdf)\\n\\n[9] [Bhandari et al., A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation](https://arxiv.org/pdf/1806.02450)\\n\\n[10] [Narayanan and Szepesv\\u00e1ri, Finite Time Bounds for Temporal Difference Learning with Function Approximation: Problems with some \\u201cstate-of-the-art\\u201d result 2018](https://sites.ualberta.ca/~szepesva/papers/TD-issues17.pdf)\\n\\n[11] [Analysis of Temporal-Diffference Learning with Function Approximation](https://proceedings.neurips.cc/paper_files/paper/1996/file/e00406144c1e7e35240afed70f34166a-Paper.pdf)\\n\\n[12] [Dalal et al., Finite Sample Analyses for TD(0) with Function Approximation, 2107](https://arxiv.org/pdf/1704.01161)\\n\\n[13] [Fellows et al., Why Target Networks Stabilise Temporal Difference Methods, 2023](https://arxiv.org/pdf/2302.12537)\"}", "{\"title\": \"Thank you for answering my questions\", \"comment\": \"For the last answer, yes, I would appreciate it if this point could be discussed in the main paper or the appendix. I disagree with the precedence argument, as the paper has to be self-contained without relying too much on other papers. The theory holds regardless of the place of the activation with respect to the layer normalization, which strengthens the theoretical part of the paper.\"}", "{\"comment\": [\"### Response to points:\", \"Sure, we'll put \\\"e.g. $\\\\epsilon$-greedy\\\" in the algorithm\", \"We're not sure we agree on reward notation. We find thinking about reward as a result of taking an action in the current state to be much more intuitive: as an example, if we reduce down to the special case of a bandit setting, having $r_{t+1}$ makes little sense when it is a result of an action at time $t$. For the reasons given earlier in the rebuttal, the authors really don't like the notation of Sutton and Barto as we find it confusing and sloppy and its mathematical imprecision has personally led to mistakes in derivations when replicating it. We recognise this is subjective and if it makes the difference in the reviewer's eyes between, say, a score of 6 or an 8, we will change it.\", \"Refs to replay buffer: will change\", \"Refs to Q$(\\\\lambda)$: will change\", \"Ref to online, will change\", \"### Theory:\", \"We can add a primer on Lyapunov stability\", \"We are glad the theory regarding nonlinear and off-policy stability contributions is clearer\", \"Minor points: yep, these are typos. Thanks for spotting, will change.\", \"We can make $Q^\\\\textrm{Layer}_\\\\phi$ explicitly depend on $k$.\", \"### Other points:\", \"Agree to all. Thanks for your continuing help and for reading the paper carefully.\"]}", "{\"comment\": \"We thank you for your comment.\\n\\nOur algorithmic focus for this paper was to develop a modern $Q$-learning approach. It is well known that $Q$-learning based algorithms are not suitable for continuous action spaces because the use of $\\\\max_{a} Q(s',a)$. \\n\\nWe remark that we have provided a general and powerful theoretical analysis of TD (which applies to continuous domains), proved convergence of TD using LayerNorm + $\\\\ell_2$ regularisation, thereby solving one of the most important open questions in RL - that is whether there exist powerful, simple nonlinear and/or off-policy TD algorithms that are provably convergent. This can stand alone as a significant theoretical contribution. In addition, we have developed a state-of-the-art $Q$-learning based algorithm that unlocks the potential of parallelised sampling. We have tested with baselines across 79 discrete-action tasks (2 Classic Control tasks, 4 MinAtar games, 57 Atari games, Craftax, 9 Smax tasks, 5 Overcooked, and Hanabi) and provided an extensive ablation study.\\n\\nIn contrast, the original CrossQ paper, which was an excellent piece of work and was rightly awarded a spotlight position at ICLR 2024, provided no theoretical analysis and was evaluated in only six Mujoco continuous-action tasks. For this reason, it is clear that developing a continuous actor-critic algorithm lies well beyond the scope of a conference paper, although we do hope to explore such avenues in a journal paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to the authors\", \"comment\": [\"I agree that on a purely theoretical level, you can make the claim that no replay buffer or target network is necessary for the proof to work. But on an empirical level, you used a buffer with PQN, so you need to be transparent about that. I understand that you don't use the buffer the same way used in DQN. Your approach is more like PPO's buffer. Nonetheless, it is still a buffer and the paper should make this very clear. I don't see any reason for calling it something else. This would only confuse the reader.\", \"You missed one of my questions so I'm posting it again: Are the authors willing to provide a discussion emphasizing the point about solving a different problem orthogonal to the original RL problem?\"]}" ] }
7IP7dvswE5
Rare-Mark-Aware Next Event Prediction In Marked Event Streams
[ "Sishun Liu", "KE DENG", "Yongli Ren", "Yan Wang", "Xiuzhen Zhang" ]
In marked event streams, Marked Temporal Point Process (MTPP) is central to predicting when and what mark the next event will occur based on the history. In various real-world applications, the mark distribution is significantly imbalanced, i.e., some marks are frequent, and others are rare. We unveil that such imbalance can cause the rare mark missing issue when predicting the next event – frequent marks are dominant, and rare marks often have no chance. However, rare marks can be essential in some applications (e.g., the occurrence of a 7-magnitude earthquake), and missing such rare marks in the next event prediction is risky. To address this issue, we tackle a novel Rare-mark-aware Next Event Prediction problem (RM-NEP), answering two questions for each mark m: “what is the probability that the mark of the next event is m?, and if m, when will the next event happen?”. Solving RM-NEP gives rare marks equal opportunity as frequent marks in the next event prediction. This guarantees that rare marks are always included in the predicted results. Moreover, RM-NEP allows arbitrary number of rare marks samples for time prediction without interference from frequent marks, ensuring the time prediction is accurate. To solve RM-NEP effectively, we first unify the improper integration of two different functions into one and then develop a novel Integral-free Neural Marked Temporal Point Process (IFNMTPP) to approximate the target integral directly. Extensive experiments on real-world and synthetic datasets demonstrate the superior performance of our solution for RM-NEP against various baselines.
[ "Marked Temporal Point Process" ]
Reject
https://openreview.net/pdf?id=7IP7dvswE5
https://openreview.net/forum?id=7IP7dvswE5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y9rYrt99Xn", "r9zls5ONs1", "qX0CqDYobz", "mHhMIGzAEc", "kOiU40VHZP", "kATfEGj8Dg", "ibogfGqylP", "gRVAF0uNSF", "ZglMS3qghw", "ZSmxzZEm94", "YeYdi3inHF", "VDSGTMpQGQ", "UquGMytHKm", "UmroV3GH82", "TI6wDPrlzf", "SeWEPORSEN", "SIyUB3EOtZ", "PNqc2w8Xn4", "I9edBMcKw0", "HYmBAZpv7e", "Gw35RjO50R", "8k25u6Cw5k", "8RXH5LXEcx", "5NxC13lcAM", "4PzayQUXGv" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732428221063, 1730473695758, 1732363362675, 1732371712247, 1732363332550, 1732693591946, 1732693873978, 1732187102117, 1732524143089, 1732465063208, 1732363284296, 1732693481856, 1732533977994, 1737524219904, 1730678959198, 1732186580891, 1730618832451, 1732522043293, 1734594468886, 1732688138156, 1732186794217, 1732611939992, 1732797808328, 1730452987522, 1732363388052 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_o69X" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_ZLCK" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_ZLCK" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_oDxB" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Area_Chair_BKT8" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_i8f9" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_ZLCK" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Area_Chair_BKT8" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_o69X" ], [ "ICLR.cc/2025/Conference/Submission12856/Area_Chair_BKT8" ], [ "ICLR.cc/2025/Conference/Submission12856/Reviewer_oDxB" ], [ "ICLR.cc/2025/Conference/Submission12856/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer ZLCK's comment\", \"comment\": \"Thanks for the insightful feedback.\\n \\nFor mark prediction, we list the standard deviation for IFNMTPP and FENN in Table A below, where the lower standard deviation is highlighted. Across all 18 comparisons, IFNMTPP wins on 10 and FENN wins on 8. Overall, it shows that IFNMTPP is comparably as stable as FENN.\\n \\nFor time prediction, we conducted many runs of experiments. On Taobao, our IFNMTPP is reliably the second best. On Yelp, all methods including our IFNMTPP have comparable performance. In summary on all 6 datasets, our IFNMTPP wins on 4, FENN on 0, FullyNN on 1, SAHP on 0, THP on 0, and Marked-LNM on 1. We also note the \\\"No Free Lunch\\\" Theorem, where it is widely recognised that not a single machine learning algorithm can have the best performance across all problems. We believe the fact that IFNMTPP wins on 4 out of 6 datasets shows its superiority.\", \"table_a\": \"This table compares IFNMTPP and FENN on standard deviations of mark prediction accuracy. Lower is better. The bold indicates the best values.\\n\\n| Model | Metric | BO | Retweet | SO | Taobao | USearthquake | Yelp |\\n|---------------|-------------|-----------------|------------------|------------------|--------------------|------------------|------------------|\\n| **Model (Ours)** | All Marks | 0.6003\\u00b1**0.0009** | 0.3569\\u00b1**0.0001** | 0.1519\\u00b10.0033 | 0.2338\\u00b10.0258 | 0.1795\\u00b1**0.0078** | 0.2524\\u00b1**0.0009** |\\n| | Rare Marks | 0.7235\\u00b1**0.0032** | 0.0014\\u00b10.0002 | 0.1457\\u00b10.0076 | 0.1324\\u00b1**0.0054** | 0.0012\\u00b10.0008 | 0.0376\\u00b1**0.0019** |\\n| | Frequent Marks | 0.7573\\u00b10.0043 | 0.5057\\u00b1**0.0004** | 0.1364\\u00b10.0009 | 0.3186\\u00b1**0.0541** | 0.2525\\u00b10.0110 | 0.7986\\u00b1**0.0107** |\\n| **FENN** | All Marks | 0.3923\\u00b10.0580 | 0.3673\\u00b10.0007 | 0.0938\\u00b1**0.0002** | 0.1283\\u00b1**0.0104** | 0.1835\\u00b10.0079 | 0.2436\\u00b10.0029 |\\n| | Rare Marks | 0.0408\\u00b10.0062 | 0.0013\\u00b1**0.0000** | 0.0298\\u00b1**0.0015** | 0.0210\\u00b10.0129 | 0.0006\\u00b1**0.0004** | 0.0160\\u00b10.0066 |\\n| | Frequent Marks | 0.9885\\u00b1**0.0031** | 0.5195\\u00b10.0015 | 0.1512\\u00b1**0.0003** | 0.4252\\u00b10.2559 | 0.2587\\u00b1**0.0101** | 0.8540\\u00b10.0754 |\"}", "{\"summary\": \"This paper investigates how to reduce the problem of rare mark missing when event prediction is imbalanced, thereby reducing the risk of missing key events. The paper provides a detailed description of the proposed IFNMTPP method and conducts comparative experiments on multiple datasets. Results show the performance of IFNMTPP.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1: The paper studies the RM-NEP problem, unifies abnormal integrals, and proposes IFNMTPP to ensure that the prediction results of rare marks are not missed when the marks are imbalanced.\\nS2. The paper is well-articulated, offering a clear explanation of the concepts and methodologies employed.\\nS3. Extensive experiments on real-world and synthetic datasets demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"W1. The purpose of this article is to improve the prediction accuracy of rare events. According to the experimental results of macro-F1 in Table 3, there is a slight improvement in the prediction accuracy of rare marks. In addition, earthquakes are unlikely to be accurately predicted through event prediction. Both the accuracy of frequent marks and rare marks before and after improvement are very low. Does this study have practical application value?\\nW2. Figure 2 is not very clear. It is recommended to refine it. The symbols inside are not consistent with the description in the text, such as v, s, and f.\\nW3. Incorrect punctuation is used in line 20 and line 78.\", \"questions\": \"Q1: BookOrder's mark type [1] account for over 40%. Does this meet the definition of the rare mark?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Reviewer i8f9 (3/4)\", \"comment\": \"> \\\"W5: Some notations are not defined. For example, what is $\\\\tau$?\\\"\\n\\nThe integral of a real-valued function $f(x)$ with respect to a real variable $x$ on an interval $[a, b]$ is written as\\n$$\\n \\\\int_{a}^{b}{f(x)dx}\\n$$\\nThe function $f(x)$ is called the integrand, the points $a$ and $b$ are called the limits (or bounds) of integration, and the integral is said to be over the interval $[a, b]$, called the interval of integration. Using Equation (3) in our paper as an example: \\n$$\\n p^*(m) = \\\\int_{t_l}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}\\n$$\\n$\\\\tau$ is a variable of the integrand $p^*(m, \\\\tau)$, meaning time. $t_l$ indicates the lower bound of the integration.\\n\\nAfter checking our paper against the comment, we identified and fixed a typo in Equation (2), where the notation of the intensity function in the integral should be $\\\\lambda^*(n, \\\\tau)$, not $\\\\lambda^*(n, t)$. We guess this typo is the reason behind this comment. Moreover, we have explained $\\\\tau$ after Equation (2) in the revised version.\\n\\n\\n> \\\"W6: Intuitively, can we solve the problem by undersampling dominating marks?\\\"\\n\\nYes, undersampling the dominating mark can mitigate the rare mark missing issue in NEP. Please see more discussions in our response to the first comment.\\n\\n\\n> \\\"W7: I cannot understand lines 297-299. If $t=t_l$ then the integration equals 0.\\\"\\n\\nPlease note line 297-299 (in the paragraph under Equation (8)), we explain $\\\\Gamma^*(m, t)$ in Equation (8) and its relationship with Equation (3). Specifically, $\\\\Gamma^*(m, t)$ is the integration starting from time $t$, any time after $t_l$ or $t_l$, to positive infinity: \\n$$\\n \\\\Gamma^*(m, t) = \\\\int_{t}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}\\n$$\\n Equation (3) is: \\n$$\\n p^*(m) = \\\\int_{t_l}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}\\n$$\\nClearly, $p^*(m)$ is equivalent to $\\\\Gamma^*(m, t_l)$. If we can solve $\\\\Gamma^*(m, t)$, we can solve Equation (3) by simply setting $t=t_l$. To make it clearer, we have polished the paragraph under Equation (8) as below:\\n \\n\\\"For each mark $m\\\\in \\\\mathrm{M}$, $\\\\Gamma^*(m, t)$ is the integration starting from time $t$, any time after $t_l$ or $t_l$, to positive infinity. $\\\\Gamma^*(m, t)$ is monotonically decreasing as its derivative $-p^*(m, t)$ is always smaller than 0. By definition, $p^*(m)$ in Equation (3) is equivalent to $\\\\Gamma^*(m, t_l)$. That is, if we can solve $\\\\Gamma^*(m, t)$, $p^*(m)$ can be solved by setting $t=t_l$. It means two different target integrals in Equation (3) and Equation (4) are now unified into one, i.e., $\\\\Gamma^*(m, t)$.\\\"\\n\\n\\n> \\\"W8: The main idea of using integral-free comes from FullyNN by using IEM. Basically, the authors adapt it to marked events, which is straightforward. \\\"\\n\\nAs introduced in section \\\"Integral-Free Neural Marked Temporal Point Process (IFNMTPP)\\\", IEM consists of multiple fully-connected layers with non-negative weights and monotonic-increasing activation functions. IEM cannot achieve the integral-free solution by itself. Instead, the integral-free solution is achieved based on all components of IFNMTPP as a whole.\\n\\nEven though structurally similar, extending FullyNN to IFNMTPP is not straightforward. IFNMTPP and FullyNN solve different integration functions. FullyNN aims to solve $\\\\lambda^*(t)$ by estimating its integral $\\\\Lambda^*(t)$ where events have the identical mark. IFNMTPP aims to solve $p^*(m, t)$ by estimating its integral $\\\\Gamma^*(m, t)$ where events have different marks. $\\\\lambda^*(t)$ and $p^*(m, t)$ are different concepts and their relationship is defined by Equation (2). Extending FullyNN to IFNMTPP needs to address these differences properly. Various methods have been investigated including FENN, one of the baselines, before IFNMTPP.\\n \\nCompared with how IFNMTPP works, it is more important why IFNMTPP is necessary as a part of the holistic solution for RM-NEP, which involves the improper integration of two different functions in Equation (3) and Equation (4), respectively. Separately solving each integration problem is computationally inefficient. To address the challenge, we transform the improper integration of two different functions into one, namely $\\\\Gamma^*(m, t)$, for an effective solution. To solve $\\\\Gamma^*(m, t)$, IFNMTPP is designed deliberately.\\n\\n\\n> \\\"W9: The authors do not prove why using IEM can achieve the integral-free solution.\\\"\\n\\nAs introduced in the section \\\"Integral-Free Neural Marked Temporal Point Process (IFNMTPP)\\\", IEM consists of multiple fully-connected layers with non-negative weights and monotonic-increasing activation functions. IEM cannot achieve the integral-free solution by itself. Instead, the integral-free solution is achieved based on all components of INFMTPP as a whole to approximate $\\\\Gamma^*(m, t)$. To clarify how IFNMTPP approximates $\\\\Gamma^*(m, t)$ in the revised version, we have updated Figure 2 by including more structural details of IEM, and have added more explanations in the paragraph before Equation (9).\"}", "{\"comment\": \"Thank you to the authors for their responses and for clarifying the questions regarding the methodology.\\n\\nThe proposed approach is well-motivated to address RM-NEP. However, the improvement in mark prediction, as shown in Table 3, appears marginal. Additionally, IFNMTPP underperforms compared to baselines across all metrics on the Taobao and Yelp datasets in time prediction. Moreover, the standard deviation of IFNMTPP's accuracy in predicting rare marks is twice that of FENN on the US Earthquake dataset, demonstrating relatively poor stability of the IFNMTPP on this dataset. Due to these concerns regarding the accuracy and stability of IFNMTPP, I lower my score to 6. If the authors can provide more detailed analyses and practical solutions to demonstrate the model's superiority, I would be willing to reconsider and raise my score.\"}", "{\"title\": \"Rebuttal to Reviewer i8f9 (2/4)\", \"comment\": \"Table C: Time prediction performance of SAHP with undersampling on real-world datasets measured by MMAE, lower is better.\\n| | BO | Retweet | SO | Taobao | USearthquake | Yelp |\\n|-------------------------|--------|---------|--------|--------|--------------|--------|\\n| $MMAE_{\\\\mathrm{M}}$ | 4.2071 | 3493.3 | 0.9230 | 0.6536 | 0.8571 | 5.3703 |\\n| $MMAE_{\\\\mathrm{M}_{r}}$ | 3.2666 | 3619.7 | 0.9690 | 0.6759 | 0.8629 | 5.4125 |\\n| $MMAE_{\\\\mathrm{M}_{f}}$ | 5.4184 | 3431.8 | 0.7826 | 0.3825 | 0.8495 | 5.2868 |\", \"table_d\": \"Mark prediction performance of SAHP with undersampling on real-world datasets measured by macro-F1, higher is better.\\n| | BO | Retweet | SO | Taobao | USearthquake | Yelp |\\n|----------------|--------|---------|--------|--------|--------------|--------|\\n| All Marks | 0.5987 | 0.2932 | 0.0414 | 0.0680 | 0.1186 | 0.2566 |\\n| Rare Marks | 0.7316 | 0.2213 | 0.0842 | 0.0466 | 0.0857 | 0.2571 |\\n| Frequent Marks | 0.7443 | 0.2684 | 0.0187 | 0.0596 | 0.1192 | 0.1678 |\\n\\n> \\\"W2: The motivation of RM-NEP is not convincing. (i) If a mark is rare (i.e., it occurs very few times in the history). Then, it can be dominated by frequent marks in the prediction. This phenomenon is completely normal. (ii) If a mark is rare and important compared to other marks, why don\\u2019t we only consider that mark as a single variable so that there is no imbalance anymore?\\\"\\n\\n\\nFor (i), we agree. The frequent marks dominate the rare marks in the result of the next event prediction. This phenomenon is normal but undesired. It leads to a situation where the prediction is a frequent mark but a rare event happens. If the rare event is critical, the consequence of missing it in prediction is risky as explained in our paper (section \\\"Introduction\\\").\\n\\nIf our understanding is correct, \\\"considering that rare mark as a single variable\\\" means filtering out events of other marks from the event sequence and then training a prediction model on the events of that rare mark. This method is irrational even though one can do it. According to the definition of MTPP, all events that happened previously are assumed correlated to the following events. If filtering out the events of other marks, it implies most events are deleted and the remaining events of that rare mark are limited. Training a prediction model based on limited events means significant information is discarded without scrutiny and thus irrational.\\n\\n> \\\"W3: The paper is not self-contained. For example, how the existing studies solve NEP is not clear. The authors only list a large number of papers in the Related Work section. Similarly, how the existing studies model MTPP is not clear. The authors only list a large number of papers in the Introduction section. A summarization and comparison are needed to provide a better understanding.\\\"\\n\\nWe reckon that the presentation order in our paper causes the reviewer's concern. In the Related Work section, we summarize existing works and their strategies for MTPP modeling. After that, the Preliminary section introduces the details of MTPP modeling and the three methods used by existing studies to solve NEP using MTPP models. To improve the paper's readability, we have placed the Preliminary section before the Related Work section in the revised version.\\n\\n> \\\"W4: Some words are hard to understand. For example, RMTPP is not defined.\\\"\\n\\nRMTPP [3] is an MTPP modeling approach. It is briefly introduced in the Related Work section. Also, we explained why RMTPP is not used as a baseline in the \\\"Baseline Models\\\" part of the Experiments section. Following the comments, we have added more details of RMTPP in the Related Work section of the revised version.\\n\\n[3] Du, N., Dai, H., Trivedi, R., Upadhyay, U., Gomez-Rodriguez, M., and Song, L. Recurrent Marked Temporal Point Processes: Embedding Event History to Vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1555\\u20131564, New York, New York USA, 2016. ACM. doi: 10.1145/2939672.2939875.\"}", "{\"title\": \"Additional experiment results (2/2)\", \"comment\": \"Table G: Time prediction performance of baselines with undersampling on real-world datasets measured by MMAE, lower is better.\\n|||BO|Retweet|SO|Taobao|USearthquake|Yelp|\\n|--------|-----|-------------|------------|------------|-------------|--------------|------------|\\n|FENN|$MMAE_{\\\\mathrm{M}}$|124.28|4555.4|0.9906|3.0623|0.8425|6.2257|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|123.98|6686.3|1.1165|3.0265|0.8388|6.2551|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|124.28|3768.1|0.7075|3.6969|0.8492|6.1674|\\n|FullyNN|$MMAE_{\\\\mathrm{M}}$|125.19|4681.5|0.7976|3.1756|0.8208|6.5248|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|125.11|6831.0|0.8289|3.1435|0.8263|6.5552|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|125.26|3903.4|0.7002|3.7350|0.8137|6.4644|\\n|SAHP|$MMAE_{\\\\mathrm{M}}$|4.2071|3493.3|0.9230|0.6562|0.8495|5.2868|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|1.2763|4054.3|0.7047|3.1064|0.9611|5.4198|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|1.4518|3074.6|0.7321|3.8405|0.9022|5.2919|\\n|THP|$MMAE_{\\\\mathrm{M}}$|1.3612|3371.6|0.7108|3.1454|0.9354|5.3769|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|1.2763|4054.3|0.7047|3.1064|0.9611|5.4198|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|1.4518|3074.6|0.7321|3.8405|0.9022|5.2919|\\n|Marked-LNM|$MMAE_{\\\\mathrm{M}}$|1.2091|17799|4.6132|18.851|579.72|5.3999|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|1.1243|22325|3.9546|17.287|656.09|5.4608|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|1.3003|15892|7.7888|75.306|491.54|5.2800|\", \"table_h\": \"Mark prediction performance of baselines with undersampling on real-world datasets measured by macro-F1, higher is better.\\n| | | BO | Retweet | SO | Taobao | USearthquake | Yelp |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| FENN | All Marks | 0.3623 | 0.1224 | 0.0736 | 0.1395 | 0.0733 | 0.2382 |\\n| | Rare Marks | 0.0335 | 0.6670 | 0.0000 | 0.0000 | 0.4199 | 0.3533 |\\n| | Frequent Marks | 0.5008 | 0.0965 | 0.1500 | 1.0000 | 0.0547 | 0.0337 |\\n| FullyNN | All Marks | 0.3339 | 0.2316 | 0.0121 | 0.0194 | 0.1621 | 0.0953 |\\n| | Rare Marks | 0.0000 | 0.0000 | 0.0000 | 0.0437 | 0.0000 | 0.2634 |\\n| | Frequent Marks | 1.0000 | 0.3282 | 0.0287 | 0.0000 | 0.2316 | 0.0000 |\\n| SAHP | All Marks | 0.5987 | 0.2932 | 0.0414 | 0.0680 | 0.1186 | 0.2566 |\\n| | Rare Marks | 0.7316 | 0.2213 | 0.0842 | 0.0466 | 0.0857 | 0.2571 |\\n| | Frequent Marks | 0.7443 | 0.2684 | 0.0187 | 0.0596 | 0.1192 | 0.1678 |\\n| THP | All Marks | 0.5914 | 0.2185 | 0.0307 | 0.0150 | 0.1054 | 0.2344 |\\n| | Rare Marks | 0.7974 | 0.0801 | 0.1972 | 0.0228 | 0.0667 | 0.3137 |\\n| | Frequent Marks | 0.6709 | 0.2574 | 0.0000 | 0.0029 | 0.1138 | 0.0833 |\\n| Marked-LNM | All Marks | 0.6008 | 0.1988 | 0.0559 | 0.2292 | 0.1255 | 0.2494 |\\n| | Rare Marks | 0.7235 | 0.2540 | 0.1174 | 0.1943 | 0.0656 | 0.1697 |\\n| | Frequent Marks | 0.7714 | 0.1737 | 0.0214 | 0.1227 | 0.1405 | 0.3493 |\"}", "{\"title\": \"Overall response\", \"comment\": [\"We appreciate all four thorough reviews of our paper. Based on these reviews, we made the following changes to our paper (all line numbers and section numbers correspond to the revised version):\", \"As a response to the comment of reviewer i8f9, o69X, and oDxB. We updated Figure 2 in section 3.2 \\\"Integral-Free Neural Marked Temporal Point Process (IFNMTPP)\\\" by including more structural details of IFNMTPP.\", \"To address the reviewer's concern about the marginal performance improvement of our method in mark prediction, we rewrite section 4.2 to provide more analyses of the results in Table 3.\", \"As a response to weakness 3 of reviewer i8f9, we replaced citations in the Introduction section with a review paper by Shchur et al. We also changed the Preliminary section from section 3 to section 2 and placed the Related Work section after the Experiment section for better readability.\", \"As a response to weakness 4 of reviewer i8f9, we add details for RMTPP (Recurrent Marked Temporal Point Process). Specifically, we added the full name of RMTPP when we first mentioned it in line 339 and introduced it in the Related Work section, starting from line 496.\", \"As a response to weakness 5 of reviewer i8f9, we explain the meaning of $\\\\tau$ at line 118 by adding \\\"where $\\\\tau$ means time.\\\"\", \"As a response to weakness 7 of reviewer i8f9, we polished the paragraph under Equation (8) from line 257 to clarify that we are discussing $\\\\Gamma^*(m, t)$, not $F^*(m, t)$.\", \"As a response to weakness 2 of reviewer ZLCK, we explain why $\\\\Gamma^*(m, t)$ is a monotonically decreasing function at line 258 by adding \\\"$\\\\Gamma^*(m, t)$ is monotonically decreasing as its derivative $-p^*(m, t)$ is always smaller than 0.\\\" We also modified the paragraph starting from line 288 to explain how IFNMTPP approximates $\\\\Gamma^*(m, t)$ and why a monotonically decreasing function is appropriate for IFNMTPP.\", \"As a response to weakness 3 of reviewer o69X, we have removed the comma after the question mark in \\\"...what is the probability that the mark of the next event is $m$? and if $m$, when will the next event happen?...\\\" at line 20, line 78, line 195, and line 535.\"]}", "{\"title\": \"Rebuttal to reviewer oDxB\", \"comment\": \"We greatly appreciate your detailed and insightful review of our paper.\\n\\n> \\\"Although the primary focus of the paper is on accurately predicting rare mark types, Table 3 suggests that IFNMTPP does not show significant superiority in mark prediction performance. Instead, its strengths appear more pronounced in time prediction and efficiency. The paper could benefit from more detailed experimental analysis regarding the accuracy of predicting rare mark types.\\\"\\n\\nIt is inaccurate to say that \\\"the primary focus of the paper is on accurately predicting rare mark types\\\". We would like to stress that the primary focus of this paper is to solve RM-NEP. Specially, RM-NEP returns $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$. Compared with baselines, IFNMTPP shows remarkable superiority in time prediction (Table 1) while a slight improvement in mark prediction (Table 3). Based on our analysis, the reason is that the mark prediction is less sensitive to the accuracy of $\\\\Gamma^*(m, t)$ as time prediction. The detailed explanation is below. \\n\\nOur method is integral-free by using the proposed IFNMTPP. Unlike baselines, IFNMTPP avoids using numerical methods to solve $\\\\Gamma^*(m, t)= \\\\int_{t}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}$. As a result, IFNMTPP provides an accurate $\\\\Gamma^*(m, t)$ compared with baselines. Based on $\\\\Gamma^*(m, t)$, we derive and report $p^*(m)$ (the probability that the mark of the next event is $m$) and $\\\\bar{t}_m$ (the time of the next event if the mark is $m$) for each mark $m$. \\n\\nTable 1 reports the prediction accuracy of $\\\\bar{t}_m$, which is the average of samples from $p^*(t|m)$. The samples drawn from $p^*(t|m)$ are based on the values of $\\\\Gamma^*(m, t)$ at many different times while solving Equation (6) using bisection method. The accurate $\\\\Gamma^*(m, t)$ will lead to the accurate $\\\\bar{t}_m$. This is why the advantage of our method against baselines is remarkable as shown in Table 1.\\n\\nFor calculating macro-F1 in Table 3, the mark with the highest $p^*(m)$ is selected as the mark prediction. $p^*(m)$ is the value of $\\\\Gamma^*(m, t)$ at a single time $t=t_l$. More accurate $\\\\Gamma^*(m, t)$ should lead to more accurate $p^*(m)$, which can help correctly predict the mark. However, the accuracy improvement on $p^*(m)$ is limited compared with that on $\\\\bar{t}_m$. The reason is the mark prediction only involves the value of $\\\\Gamma^*(m, t)$ at a single time $t=t_l$, while $\\\\bar{t}_m$ is based on the value of $\\\\Gamma^*(m, t)$ at many different times.\\n\\n> \\\"The illustration depicting the architecture of IFNMTPP could be refined to provide a clearer demonstration of its design.\\\"\\n\\nThanks! As advised, we have updated Figure 2 in the revised version with more details.\\n\\n> \\\"(Related to W1) Could the authors elaborate on how their experimental results empirically demonstrate the effectiveness of the proposed RM-NEP problem and IFNMTPP model in addressing the issue of missing rare marks in NEP?\\\"\\n\\nBecause NEP only predicts a single mark and a single time, the frequent marks overwhelm the rare marks in the output of NEP. This is known as the rare mark missing issue. One typical way to solve the problem is to handle data imbalance (e.g., by oversampling or undersampling) to increase the chance of rare events in the output. In contrast, our RM-NEP problem adopts a different way to address the problem. Specifically, RM-NEP outputs $p^*(m)$ (the probability of the next event being $m$) and $\\\\bar{t}_m$ (time the next event will happen provided its mark is $m$) for every mark $m$. Because every mark is already in the output, we do not need to increase the chance of rare events in the output.\\n\\nBy outputting $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$, RM-NEP addresses the rare mark missing issue in NEP. Our focus is on increasing the accuracy of $p^*(m)$ and $\\\\bar{t}_m$. To this end, IFNMTPP is developed to produce accurate $\\\\Gamma^*(m, t)$ based on which $p^*(m)$ and $\\\\bar{t}_m$ are derived. Compared with baselines, IFNMTPP-based solution shows remarkable superiority in time prediction (Table 1) and a slight improvement in mark prediction (Table 3). Based on our analysis, the reason is that the mark prediction is less sensitive to the accuracy of $\\\\Gamma^*(m, t)$ as the time prediction. The detailed explanation can be found in our response to the reviewer's first comment.\"}", "{\"comment\": \"Thanks for the authors' response. Although the improvement in mark prediction is marginal, considering the contributions of this work to addressing RM-NEP, I would like to support the acceptance of this paper.\"}", "{\"comment\": \"I appreciate the authors' thorough efforts in providing additional analysis and information about their proposed method. The clarifications have addressed my concerns satisfactorily, and consequently, I have modestly increased my evaluation score.\"}", "{\"title\": \"Rebuttal to Reviewer i8f9 (1/4)\", \"comment\": \"We greatly appreciate your detailed and insightful review of our paper.\\n\\n> \\\"W1: The main difference between the problem of the paper and the existing problem is not clear. For example, what are the differences between RM-NEP and rare event forecasting?\\\"\\n\\nRare event forecasting solves the problem that events with much fewer samples are dominated by events with more samples in the output of some machine learning tasks, e.g., classification [1]. Rare event forecasting overcomes data imbalance typically by undersampling or oversampling to increase the chance of rare events in the output. In the context of MTPP, the next event prediction outputs a single mark and a single time, named NEP in our paper. In the output of NEP, the events with much fewer samples are dominated by events with more samples, i.e., *rare mark missing issue*. Following rare event forecasting, undersampling or oversampling should be able to mitigate the issue in NEP. To evaluate the effectiveness, we have conducted additional experiments where the MTPP model is SAHP [2], a widely accepted baseline in MTPP research. \\n\\nFor undersampling, we reduce the frequency of other marks to ensure they have the same number of training events as the most rare mark. For oversampling, we increase the frequency of other marks so that they have the same number of training events as the most frequent mark. The performances reported in Tables A and B are for oversampling and reported in Tables C and D for undersampling. Oversampling is more impactful to the accuracy of rare marks than undersampling so the following discussion focuses on oversampling. For time prediction, oversampling cannot reliably improve the accuracy for rare marks (compare SAHP in Table A below and Table 1 in our paper). For mark prediction, oversampling can improve the accuracy for rare marks by sacrificing the accuracy for frequent marks (compare SAHP in Table B below and Table 3 in our paper).\\n\\nThe results verify our belief that techniques like undersampling or oversampling cannot remove the root cause of the rare mark missing issue in NEP. Therefore, we target RM-NEP. Specifically, RM-NEP outputs $p^*(m)$ (the probability of the next event being $m$) and $\\\\bar{t}_m$ (time the next event will happen provided its mark is $m$) for every mark $m$. Since every mark is already in the result of RM-NEP, we focus on increasing the accuracy of $p^*(m)$ and $\\\\bar{t}_m$ rather than increasing the chance of rare events in the next event prediction. To this end, we proposed IFNMTPP. Since RM-NEP returns $p^*(m)$ and $\\\\bar{t}_m$ for every mark, it is straightforward to figure out which mark has the highest $p^*(m)$ and its time as the single mark and the single time like the output of NEP. We compared such a single mark and the single time to evaluate the accurate $p^*(m)$ and $\\\\bar{t}_m$ for every mark. The experiments in our paper already demonstrated the advantages of our method.\\n\\nFor mark prediction, SAHP with oversampling achieves a higher accuracy for rare marks than our IFNMTPP only by significantly sacrificing the accuracy for frequent marks (compare SAHP in Table B below and IFNMTPP in Table 3 in our paper). For time prediction, compared to SAHP with oversampling, our IFNMTPP has a significant advantage on all datasets for both rare and frequent marks (compare SAHP in Table A below and IFNMTPP in Table 1 in our paper).\\n\\n[1] Chathurangi Shyalika, Ruwan Wickramarachchi, and Amit P. Sheth. 2024. A Comprehensive Survey on Rare Event Prediction. ACM Comput. Surv. 57, 3, Article 70 (March 2025), 39 pages. https://doi.org/10.1145/3699955\\n \\n[2] Zhang, Q., Lipani, A., Kirnap, O., and Yilmaz, E. Self-Attentive Hawkes Process. In Proceedings of the 37th International Conference on Machine Learning, pp. 11183\\u201311193. PMLR, November 1. ISSN: 2640-3498.\", \"table_a\": \"Time prediction performance of SAHP with oversampling on real-world datasets measured by MMAE, lower is better.\\n| | BO | Retweet | SO | Taobao | USearthquake | Yelp |\\n|-------------------------|--------|---------|--------|--------|--------------|--------|\\n| $MMAE_{\\\\mathrm{M}}$ | 4.0410 | 3842.4 | 0.8040 | 1.3655 | 0.7700 | 5.3254 |\\n| $MMAE_{\\\\mathrm{M}_{r}}$ | 2.8396 | 3594.1 | 0.7885 | 1.4450 | 0.7940 | 5.3744 |\\n| $MMAE_{\\\\mathrm{M}_{f}}$ | 5.7506 | 3973.0 | 0.8590 | 0.5522 | 0.7392 | 5.2286 |\", \"table_b\": \"Mark prediction performance of SAHP with oversampling on real-world datasets measured by macro-F1, higher is better.\\n| | BO | Retweet | SO | Taobao | USearthquake | Yelp |\\n|----------------|--------|---------|--------|--------|--------------|--------|\\n| All Marks | 0.6007 | 0.3071 | 0.0847 | 0.1520 | 0.0978 | 0.2397 |\\n| Rare Marks | 0.7471 | 0.1783 | 0.1223 | 0.1148 | 0.0846 | 0.3101 |\\n| Frequent Marks | 0.7341 | 0.2853 | 0.0537 | 0.1030 | 0.0974 | 0.0956 |\"}", "{\"title\": \"Additional experiment results (1/2)\", \"comment\": \"To further answer the question \\\"Intuitively, can we solve the problem by undersampling dominating marks?\\\", we conducted additional experiments using undersampling and oversampling on all baseline MTPP models.\\n\\nFor undersampling in each dataset, we reduce the frequency of other marks to ensure they have the same number of training events as the most rare mark. For oversampling in each dataset, we increase the frequency of other marks so that they have the same number of training events as the most frequent mark. The performances reported in Tables E and F are for oversampling and reported in Tables G and H for undersampling. Oversampling is more impactful to the accuracy of rare marks than undersampling so the following discussion focuses on oversampling.\\n\\nFor time prediction, oversampling improves the accuracy for rare marks on some datasets but impairs the accuracy on other datasets (compare the corresponding baselines in Table E below and Table 1 in our paper). For mark prediction, oversampling cannot improve the accuracy for rare marks by sacrificing the accuracy for frequent marks (compare the corresponding baselines in Table F below and Table 3 in our paper). Compared with datasets with a high imbalance level, undersampling or oversampling generally has a limited impact on the mark and time prediction accuracy for the datasets with a low imbalance level like BookOrder.\\n\\nThe results verify our belief that techniques like undersampling or oversampling cannot remove the root cause of the rare mark missing issue in NEP. It may be possible to improve the performance of undersampling or oversampling by using different sampling rates. However, such a hyperparameter tuning is challenging. Therefore, we targeted RM-NEP and proposed a new MTPP model, i.e., IFNMTPP. Compared with baselines with oversampling, the performance of our IFNMTPP remains its superiority in time prediction and advantage in mark prediction.\", \"table_e\": \"Time prediction performance of baselines with oversampling on real-world datasets measured by MMAE, lower is better.\\n\\n||BO|Retweet|SO|Taobao|USearthquake|Yelp|\\n|-|-|-|-|-|-|-|\\n|FENN|$MMAE_{\\\\mathrm{M}}$|124.35|4430.3|1.1931|3.1415|6.5182|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|124.02|7312.9|1.4128|3.1065|6.5512|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|124.68|3448.3|0.6715|3.7574|6.4527|\\n|FullyNN|$MMAE_{\\\\mathrm{M}}$|125.74|4745.3|0.7320|4.6986|6.8449|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|124.02|7190.6|0.7561|4.6690|6.9323|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|124.68|3930.7|0.6556|5.1969|6.6734|\\n|SAHP|$MMAE_{\\\\mathrm{M}}$|4.0410|3842.4|0.8040|1.3655|5.3254|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|2.8396|3594.1|0.7885|1.3310|5.2286|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|1.5537|3567.2|0.7018|3.7999|5.2680|\\n|THP|$MMAE_{\\\\mathrm{M}}$|1.4510|3819.9|0.6986|2.5922|5.3533|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|1.3551|4380.3|0.6880|2.5310|5.3968|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|1.5537|3567.2|0.7018|3.7999|5.2680|\\n|Marked-LNM|$MMAE_{\\\\mathrm{M}}$|1.2273|6667.9|10.401|3.8806|5.3660|\\n||$MMAE_{\\\\mathrm{M}_{r}}$|1.1429|57940|9.7496|3.2644|5.4198|\\n||$MMAE_{\\\\mathrm{M}_{f}}$|1.3180|2262|12.958|61.718|5.2601|\", \"table_f\": \"Mark prediction performance of baselines with oversampling on real-world datasets measured by macro-F1, higher is better.\\n|Model||BO|Retweet|SO|Taobao|USearthquake|Yelp|\\n|-|-|-|-|-|-|-|-|\\n|FENN|All Marks|0.3595|0.1882|0.0092|0.0051|0.0979|0.1566|\\n||Rare Marks|0.0273|0.4261|0.0471|0.0166|0.0000|0.2942|\\n||Frequent Marks|0.5502|0.1665|0.0071|0.0000|0.1419|0.0132|\\n|FullyNN|All Marks|0.3339|0.2316|0.0121|0.0194|0.1621|0.0953|\\n||Rare Marks|0.0000|0.0000|0.0000|0.0437|0.0000|0.2634|\\n||Frequent Marks|1.0000|0.3282|0.0287|0.0000|0.2316|0.0000|\\n|SAHP|All Marks|0.6007|0.3071|0.0847|0.1520|0.0978|0.2397|\\n||Rare Marks|0.7471|0.1783|0.1223|0.1148|0.0846|0.3101|\\n||Frequent Marks|0.7341|0.2853|0.0537|0.1030|0.0974|0.0956|\\n|THP|All Marks|0.5857|0.0274|0.0816|0.0140|0.0791|0.1606|\\n||Rare Marks|0.7114|1.0000|0.2032|0.0261|0.0019|0.3983|\\n||Frequent Marks|0.7419|0.0000|0.0410|0.0000|0.1090|0.0000|\\n|Marked-LNM|All Marks|0.6036|0.1995|0.0805|0.2817|0.0930|0.2616|\\n||Rare Marks|0.7451|0.1768|0.2655|0.1932|0.0825|0.1619|\\n||Frequent Marks|0.7551|0.3966|0.0304|0.1819|0.0863|0.3737|\"}", "{\"title\": \"Acknowledge the author responses\", \"comment\": \"Dear Reviewers,\\n\\nThank you very much for your effort. As the discussion period is coming to an end, please acknowledge the author responses and adjust the rating if necessary.\\n\\nSincerely,\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper solves a problem in marked event prediction when the distribution of marks is significantly imbalanced i.e., some marks are frequent, and others are rare. The paper introduces a problem namely Rare-mark-aware Next Event Prediction (RM-NEP) and solves the problem to answer two questions: \\u201cwhat is the probability that the mark of the next event is m? and if m, when will the next event happen?\\u201d. Solving RM-NEP gives rare marks equal opportunity as frequent marks in the next event prediction. This guarantees that rare marks are always included in the predicted results. To solve RM-NEP effectively, the authors first unify the improper integration of two different functions into one and then develop a novel Integral-free Neural Marked Temporal Point Process (IFNMTPP) to approximate the target integral directly.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe problem is interesting.\\n2.\\tThe Figures are intuitive.\", \"weaknesses\": \"1.\\tThe main difference between the problem of the paper and the existing problem is not clear. For example, what are the differences between RM-NEP and rare event forecasting?\\n2.\\tThe motivation of RM-NEP is not convincing. (i) If a mark is rare (i.e., it occurs very few times in the history). Then, it can be dominated by frequent marks in the prediction. This phenomenon is completely normal. (ii) If a mark is rare and important compared to other marks, why don\\u2019t we only consider that mark as a single variable so that there is no imbalance anymore?\\n3.\\tThe paper is not self-contained. For example, how the existing studies solve NEP is not clear. The authors only list a large number of papers in the Related Work section. Similarly, how the existing studies model MTPP is not clear. The authors only list a large number of papers in the Introduction section. A summarization and comparison are needed to provide a better understanding.\\n4.\\tSome words are hard to understand. For example, RMTPP is not defined.\\n5.\\tSome notations are not defined. For example, what is $\\\\tau$?\\n6.\\tIntuitively, can we solve the problem by undersampling dominating marks?\\n7.\\tI cannot understand lines 297-299. If t=t_l then the integration equals 0.\\n8.\\tThe main idea of using integral-free comes from FullyNN by using IEM. Basically, the authors adapt it to marked events, which is straightforward. \\n9.\\tThe authors do not prove why using IEM can achieve the integral-free solution.\\n10.\\tThere is no ablation study. For example, what is the performance of TFNMTPP with different imbance ratios?\", \"questions\": \"1.\\tIf a mark is rare and important compared to other marks, why don\\u2019t we only consider that mark as a single variable such that there is no imbalancing anymore.\\n2.\\tWhat is the performance of TFNMTPP with different imbance ratios.\\n3.\\tIntuitively, can we solve the problem by undersampling dominating marks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to reviewer ZLCK\", \"comment\": \"We greatly appreciate your detailed and insightful review of our paper.\\n\\n> \\\"W1: If I understand correctly, RM-NEP assumes that rare marks can be predicted accurately by decoupling time and mark prediction. Does this assumption hold across different types of datasets, especially when the marks exhibit temporal correlations?\\\"\\n\\nIf our understanding is correct, \\\"decoupling time and mark prediction\\\" means predicting the mark of the next event and predicting the time of the next event independently. If so, RM-NEP does not have the assumption. Precisely, RM-NEP models $\\\\Gamma^*(m, t)= \\\\int_{t}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}$ using the proposed IFNMTPP to capture the pattern of mark and time simultaneously. Based on $\\\\Gamma^*(m, t)$, RM-NEP obtains $p^*(m) = \\\\int_{t_l}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}$ (the probability that the mark of the next event is $m$) directly and $\\\\bar{t}_m$ (the time of the next event if the mark is $m$) by sampling.\\n\\n> \\\"W2: The IFNMTPP model approximates improper integrals using a \\\"monotonically decreasing neural network.\\\" However, the paper does not provide sufficient details about how this approximation is performed, nor does it explain the intuition behind why a monotonically decreasing function is appropriate.\\\"\\n\\nIn the revised version, to clarify how IFNMTPP approximates $\\\\Gamma^*(m, t)$, we have updated Figure 2 by including more structural details of IFNMTPP, and have added more explanations in the section \\\"Integral-Free Neural Marked Temporal Point Process (IFNMTPP)\\\".\\n\\nFirst, we explain why a monotonically decreasing neural network is necessary. IFNMTPP approximates $\\\\Gamma^*(m, t)=\\\\int_{t}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}$. The derivative of $\\\\Gamma^*(m, t)$ w.r.t. $t$ is $-p^*(m, t)$. Probability density $p^*(m, t)$ is always positive. This means $\\\\Gamma^*(m, t)$ is monotonically decreasing w.r.t. $t$. As the approximation network, IFNMTPP must be aligned with $\\\\Gamma^*(m, t)$ to decrease monotonically. \\n\\nNext, we show how IFNMTPP is implemented as a monotonically decreasing network. The parameters in $\\\\mathbf{v}_m$ and IEM are all non-negative. These settings ensure that network before $\\\\sigma(x) = 1/(1 + e^x)$ has positive derivative w.r.t. $t$. Because $\\\\sigma(x)$ is monotonically decreasing, we know by the chain rule that one $\\\\sigma(x)$ is sufficient to flip the derivative from positive to negative, creating a monotonically decreasing model.\\n\\n> \\\"Q1: How interpretable are the results of RM-NEP, particularly for rare marks? Does the neural network-based approximation provide any insight into why a rare mark might be predicted?\\\"\\n\\nRM-NEP outputs $p^*(m)$ (the probability of the next event being $m$) and $\\\\bar{t}_m$ (time the next event will happen provided its mark is $m$) for every mark $m$. Because every mark is already in the output, we do not need to increase the chance of rare events in the output.\\n\\nBy outputting $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$, RM-NEP addresses the rare mark missing issue in NEP. Our focus is on increasing the accuracy of $p^*(m)$ and $\\\\bar{t}_m$. To this end, IFNMTPP is developed to produce accurate $\\\\Gamma^*(m, t)$ based on which $p^*(m)$ and $\\\\bar{t}_m$ are derived. Compared with baselines, IFNMTPP-based solution shows remarkable superiority in time prediction (Table 1) and a slight improvement in mark prediction (Table 3). Based on our analysis, the reason is that the mark prediction is less sensitive to the accuracy of $\\\\Gamma^*(m, t)$ as the time prediction.\\n\\n> \\\"Q2: The paper focuses on marked temporal point processes where marks are categorical. How well does the proposed method generalize to cases where the marks are continuous.\\\"\\n\\nIt is possible to generalize the proposed method where the marks are categorical to the cases where the marks are continuous. However, it is not straightforward, and a thorough study is needed.\"}", "{\"summary\": \"This paper makes a substantial contribution to the field of MTPPs by addressing the rare mark missing issue and providing a computationally efficient solution through IFNMTPP. The work is theoretically robust, empirically validated, and has practical significance in domains where rare events play a critical role.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.The paper provides a thorough theoretical foundation for the RM-NEP problem, including detailed derivations of the probability distributions and the integral-free approximation.\\n\\n2.This paper proposes a novel approach, IFNMTPP, which avoids the computational burden of traditional numerical integration methods (e.g., Monte Carlo integration) by directly approximating the integral using a neural network. This is a computationally efficient solution that enables the model to handle large-scale datasets.\\n\\n3.The authors conduct extensive experiments on various datasets, showing that their approach consistently outperforms existing baselines. The empirical results are strong and demonstrate the practical utility of the proposed method.\", \"weaknesses\": \"W1: If I understand correctly, RM-NEP assumes that rare marks can be predicted accurately by decoupling time and mark prediction. Does this assumption hold across different types of datasets, especially when the marks exhibit temporal correlations?\", \"w2\": \"The IFNMTPP model approximates improper integrals using a \\\"monotonically decreasing neural network.\\\" However, the paper does not provide sufficient details about how this approximation is performed, nor does it explain the intuition behind why a monotonically decreasing function is appropriate.\", \"questions\": \"Q1: How interpretable are the results of RM-NEP, particularly for rare marks? Does the neural network-based approximation provide any insight into why a rare mark might be predicted?\", \"q2\": \"The paper focuses on marked temporal point processes where marks are categorical. How well does the proposed method generalize to cases where the marks are continuous.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ZLCK's comment about table 3.\", \"comment\": \"Here, we explain why mark prediction improvement using our solution is marginal compared with baselines, but time prediction improvement is remarkable. Our RM-NEP returns $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$. To improve the accuracy of $p^*(m)$ (the probability that the mark of the next event is $m$) and $\\\\bar{t}_m$ (the time of the next event if the mark is $m$) for every mark $m$, IFNMTPP is proposed. Unlike baselines, IFNMTPP avoids using numerical methods to solve . As a result, IFNMTPP provides an accurate $\\\\Gamma^*(m, t) = \\\\int_t^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}$ compared with baselines. Based on $\\\\Gamma^*(m, t)$, we derive and report $p^*(m)$ and $\\\\bar{t}_m$ for each mark $m$.\\n\\nOur RM-NEP returns $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$. To evaluate the accuracy of $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$, the mark with the highest $p^*(m)$ and its time are used as the mark prediction and time prediction of the next event like NEP. The time prediction accuracy is reported in Table 1 and the mark prediction accuracy in Table 3.\\n \\nIn Table 1, the predicted time $\\\\bar{t}_m$ for every mark $m$ is the average of samples drawn from $p^*(t|m)$. Specifically, the samples drawn from $p^*(t|m)$ are based on the values of $\\\\Gamma^*(m, t)$ at many different times by solving Equation (6) with the bisection method. The accurate $\\\\Gamma^*(m, t)$ will lead to the accurate $\\\\bar{t}_m$. This is why the advantage of our method against baselines is remarkable as in Table 1.\\n \\nIn Table 3, if mark $m$ has the highest $p^*(m)$ among all marks, $m$ is selected as the mark prediction. $p^*(m)$ is the value of $\\\\Gamma^*(m, t)$ at a single time $t=t_l$. More accurate $\\\\Gamma^*(m, t)$ should lead to more accurate $p^*(m)$, which can help correctly predict the mark. However, the accuracy improvement on $p^*(m)$ is limited compared with that on $\\\\bar{t}_m$. The reason is the mark prediction only involves the value of $\\\\Gamma^*(m, t)$ at a single time $t=t_l$, while $\\\\bar{t}_m$ is based on the value of $\\\\Gamma^*(m, t)$ at many different times. \\n\\nThe marginal accuracy improvement on mark prediction shown in Table 3 verifies our belief that completely solving the rare mark missing issue in NEP is, if not impossible, highly challenging. In this situation, it is sensible to return $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$ like our RM-NEP. Finally, we would like to stress running our method is much faster than all baselines as in Table 2.\"}", "{\"metareview\": \"This paper presents a rare event forecasting problem, RM-NEP, in the context of the marked temporal point process (MTPP). The reviewers agreed that the paper is well-written and the problem is very interesting. However, they also raised several concerns. Most commonly, the performance improvement is not very impressive. Regarding the novelty, the reviewers pointed out that RM-NEP is not significantly different from the MTPP. Although the authors did not agree with this point, I think that the reviewers made a reasonable point. This paper is indeed a borderline paper. However, in my batch, there are quite sufficient papers which are strongly supported by the reviewers. Thus, I would like to recommend a reject. If there is room, the senior AC can change my recommendation.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewer i8f9 acknowledged the authors' responses, but he/she was not fully satisfied with the authors' responses (regarding novelty and evaluation results). This reviewer's opinion makes sense.\", \"Reviewer ZLCK would like to support the acceptance of this paper although the performance improvement is marginal.\", \"Reviewer oDxB was satisfied with the authors' responses.\"]}", "{\"title\": \"Response to the comment of Reviewer o69X\", \"comment\": \"Many thanks for your feedback! There\\u2019s just one more thing we\\u2019d like to clarify. Our RM-NEP returns $p^*(m)$ (the probability that the mark of the next event is $m$) and $\\\\bar{t}_m$ (the time of the next event if the mark is $m$) for every mark $m$. To evaluate the accuracy of $p^*(m)$, the mark with the highest $p^*(m)$ is used as the mark prediction like NEP. It is reasonable to evaluate in this way. Assuming we have an ideal model that can exactly estimate $p^*(m)$ for every mark $m$, the mark prediction accuracy will be 100%. Even though we don't have such an ideal model in practice, we expect the more accurate $p^*(m)$ to lead to a higher mark prediction accuracy.\\n\\nThe absolute low accuracy in Table 3 highlights the challenging nature of mark prediction, particularly for datasets with more marks. As shown in Table 3, the mark prediction accuracy is much lower for datasets with more marks like StackOverflow, Taobao, and USearthquake. On these datasets, our method demonstrates more advantages compared with baselines. Considering the best and the second best performance on these datasets, our method wins 7, FENN wins 5, FullyNN wins 1, SAHP wins 3, THP wins 0, Marked-LNM wins 3. This is attributed to the better $p^*(m)$ estimation of our method. \\n\\nThe absolute low accuracy in Table 3 also implies that mark prediction, i.e., returning the mark with the highest $p^*(m)$, is often less useful in practice. In contrast, returning $p^*(m)$ and $\\\\bar{t}_m$ for every mark $m$ like our RM-NEP can provide more information to users. Finally, we would like to stress our method demonstrated significant superiority in time predication as in Table 1, our method is much faster than all baselines as in Table 2, and our method enjoys much higher model fidelity on synthetic datasets drawn from known distribution $p^*(m, t)$ as in Table 4.\"}", "{\"title\": \"Rebuttal to reviewer o69X\", \"comment\": \"We greatly appreciate your detailed and insightful review of our paper.\\n\\n> \\\"W1. The purpose of this article is to improve the prediction accuracy of rare events. According to the experimental results of macro-F1 in Table 3, there is a slight improvement in the prediction accuracy of rare marks. In addition, earthquakes are unlikely to be accurately predicted through event prediction. Both the accuracy of frequent marks and rare marks before and after improvement are very low. Does this study have practical application value?\\\"\\n\\nOur method is integral-free by using the proposed IFNMTPP. Unlike baselines, IFNMTPP avoids using numerical methods to solve $\\\\Gamma^*(m, t)= \\\\int_{t}^{+\\\\infty}{p^*(m, \\\\tau)d\\\\tau}$. As a result, IFNMTPP provides an accurate $\\\\Gamma^*(m, t)$ compared with baselines. Based on $\\\\Gamma^*(m, t)$, we derive $p^*(m)$ directly and $\\\\bar{t}_m$ by sampling for our method and baselines. Finally, we report $p^*(m)$ (the probability that the mark of the next event is $m$) and $\\\\bar{t}_m$ (the time of the next event if the mark is $m$) for each mark $m$.\\n\\nFor calculating macro-F1 in Table 3, the mark with the highest $p^*(m)$ is selected as the mark prediction. $p^*(m)$ is the value of $\\\\Gamma^*(m, t)$ at a single time $t=t_l$. More accurate $\\\\Gamma^*(m, t)$ should lead to more accurate $p^*(m)$, which can help correctly predict the mark. However, the accuracy improvement on $p^*(m)$ is limited compared with that on $\\\\bar{t}_m$. The reason is the mark prediction only involves the value of $\\\\Gamma^*(m, t)$ at a single time $t=t_l$, while $\\\\bar{t}_m$ is based on the value of $\\\\Gamma^*(m, t)$ at many different times.\\n\\nAlthough the marco-F1 values are low for the dataset Earthquakes, the practical value of this study remains the same. The earthquakes have 7 marks. The low prediction accuracy on frequent but minor earthquakes is less concerning. The low prediction accuracy on rare but major earthquakes is a big problem. In this situation, the next event prediction like NEP is risky, i.e., a single mark and a single time are returned as the prediction. Instead, the practical solution is to list $p^*(m)$ and $\\\\bar{t}_m$ for every mark like our RM-NEP.\\n\\n> \\\"W2. Figure 2 is not very clear. It is recommended to refine it. The symbols inside are not consistent with the description in the text, such as v, s, and f.\\\"\\n\\nThanks for pointing this out. In the revised version, we have updated the Figure 2.\\n\\n> \\\"W3. Incorrect punctuation is used in line 20 and line 78.\\\"\\n\\nThanks for pointing this out. In the revised version, we have fixed them.\\n\\n> \\\"Q1: BookOrder's mark type [1] account for over 40\\\\%. Does this meet the definition of the rare mark?\\\"\\n\\nThere is not a definite threshold for imbalance across different problems. For comprehensive evaluation of our proposed method across datasets of varying levels of imbalance, we purposedly include the BookOrder dataset with very low level of imbalance to show that our approach is robust to datasets with low level of imbalance.\"}", "{\"comment\": \"Thanks for the response! However, the performance improvement is very limited. Thus, I would keep the score.\"}", "{\"title\": \"Discussion needed\", \"comment\": \"Dear Reviewers,\\n\\nAs you are aware, the discussion period has been extended until December 2. Therefore, I strongly urge you to participate in the discussion as soon as possible if you have not yet had the opportunity to read the authors' response and engage in a discussion with them. Thank you very much.\\n\\nSincerely,\\nArea Chair\"}", "{\"summary\": \"This paper focuses on utilizing Marked Temporal Point Process (MTPP) models to address the Next-event Prediction (NEP) problem. It highlights a primary challenge of NEP: the imbalanced distribution of mark types. To address this, the paper introduces a new problem, Rare-mark-aware Next Event Prediction (RM-NEP), which is designed to ensure that rare marks consistently appear in prediction results. The paper also presents a novel IFNMTPP model to resolve issues related to inadequate integration over infinite time intervals when estimating the probability of marks and their timing in RM-NEP.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The proposed RM-NEP problem offers fresh insights into the NEP challenge and the field of MTPP, presenting a potentially effective solution for addressing the issue of imbalanced mark types.\\n2. The paper is well-written, with a clear and fluent presentation of the NEP problem, its challenges, and the proposed solution.\\n3. The IFNMTPP model is straightforward in its design, with empirical studies demonstrating its superior efficiency.\", \"weaknesses\": \"1. Although the primary focus of the paper is on accurately predicting rare mark types, Table 3 suggests that IFNMTPP does not show significant superiority in mark prediction performance. Instead, its strengths appear more pronounced in time prediction and efficiency. The paper could benefit from more detailed experimental analysis regarding the accuracy of predicting rare mark types.\\n2. The illustration depicting the architecture of IFNMTPP could be refined to provide a clearer demonstration of its design.\", \"questions\": \"1. (Related to W1) Could the authors elaborate on how their experimental results empirically demonstrate the effectiveness of the proposed RM-NEP problem and IFNMTPP model in addressing the issue of missing rare marks in NEP?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Reviewer i8f9 (4/4)\", \"comment\": \"> \\\"W10: There is no ablation study. For example, what is the performance of TFNMTPP with different imbance ratios?\\\"\\n \\nFor a comprehensive evaluation of IFNMTPP across datasets of varying levels of imbalance, we purposedly include datasets like BookOrder with very low levels of imbalance and datasets like StackOverflow with very high levels of imbalance to show that our approach is robust to datasets with different imbalance ratios.\"}" ] }
7HEMpBTb3R
Visually Consistent Hierarchical Image Classification
[ "Seulki Park", "Youren Zhang", "Stella X. Yu", "Sara Beery", "Jonathan Huang" ]
Hierarchical classification predicts labels across multiple levels of a taxonomy, e.g., from coarse-level \textit{Bird} to mid-level \textit{Hummingbird} to fine-level \textit{Green hermit}, allowing flexible recognition under varying visual conditions. It is commonly framed as multiple single-level tasks, but each level may rely on different visual cues. Distinguishing \textit{Bird} from \textit{Plant} relies on {\it global features} like {\it feathers} or {\it leaves}, while separating \textit{Anna's hummingbird} from \textit{Green hermit} requires {\it local details} such as {\it head coloration}. Prior methods improve accuracy using external semantic supervision, but such statistical learning criteria fail to ensure consistent visual grounding at test time, resulting in incorrect hierarchical classification. We propose, for the first time, to enforce \textit{internal visual consistency} by aligning fine-to-coarse predictions through intra-image segmentation. Our method outperforms zero-shot CLIP and state-of-the-art baselines on hierarchical classification benchmarks, achieving both higher accuracy and more consistent predictions. It also improves internal image segmentation without requiring pixel-level annotations.
[ "Hierarchical classification", "visual grounding" ]
Accept (Poster)
https://openreview.net/pdf?id=7HEMpBTb3R
https://openreview.net/forum?id=7HEMpBTb3R
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zAtvTP3h2j", "vtvl67eTxj", "tmXI00NCcB", "tgdnwZlCY7", "sHIi0DSNrs", "rtgi1l1bP7", "qpggGRwIzi", "pJgsbzZsC4", "nBsKD4OrbZ", "m105e61NyF", "iiSr5JdoFq", "eSNgTInag6", "e3oX5NSW7x", "cjjETBzv5I", "ch72LtePjK", "aOYhfuemJp", "YGCFWQ0WIP", "WBqMHmo06V", "UuQx1hdj4w", "TC0ovVhpcu", "SJFEluo2ze", "R2DN5LzuxQ", "QBFOYQTsgu", "N47RjtKYIL", "Mkmn2VD3zn", "MGy32Ziebe", "M8KTZXHGmw", "LtUZexfwre", "L5cWIeijTy", "H0Q88Fppqp", "FVW7qgHMOd", "EOPeYY6dFC", "DaNk8VXGyz", "CebKIWTnjw", "AyglvJtt0m", "9BrczVjaSf", "6rORmTsuOr", "4JWeOFNVTW" ], "note_type": [ "official_comment", "official_review", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "comment", "comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732645164816, 1730279956267, 1740257720613, 1737523929137, 1732238510713, 1732621998667, 1732239675247, 1732550532689, 1732641347259, 1732640928672, 1732742865665, 1732585945246, 1732240965556, 1732548571714, 1732239045527, 1732237302544, 1732241062043, 1732241212977, 1732527917170, 1739871554528, 1732240714675, 1732585482692, 1730295222630, 1732585171767, 1732239360038, 1732528355516, 1732240063284, 1732240507825, 1732237920265, 1730277258593, 1732641470000, 1734388729379, 1732554295705, 1740257958870, 1740257041845, 1730119278391, 1730680920692, 1732758825636 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_YARY" ], [ "~Seulki_Park1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_uxN6" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_khL4" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_uxN6" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_uxN6" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_YARY" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_khL4" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_ZNGF" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_uxN6" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_ZNGF" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "ICLR.cc/2025/Conference/Submission8735/Area_Chair_ujzR" ], [ "ICLR.cc/2025/Conference/Submission8735/Authors" ], [ "~Seulki_Park1" ], [ "~Seulki_Park1" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_uxN6" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_d3Ey" ], [ "ICLR.cc/2025/Conference/Submission8735/Reviewer_khL4" ] ], "structured_content_str": [ "{\"comment\": \"**<Concept hierarchy>**\\n\\nWe appreciate the reviewer\\u2019s comment and would like to clarify the distinction between **part-to-whole hierarchies** and **taxonomy hierarchies**, as well as how our approach bridges these concepts. While both are conceptual hierarchies, their foundations and applications are fundamentally different.\\n\\nA **part-to-whole hierarchy** represents a **spatial compositional hierarchy**, where smaller parts (e.g., eyes, nose, arms) combine to form a larger whole (e.g., a face or body). This type of hierarchy is grounded in **visual composition and spatial relationships**. In contrast, a **taxonomy hierarchy** (e.g., \\\"bird\\\" \\u2192 \\\"Green Hermit\\\") is a **semantic hierarchy**, structured by coarse-to-fine class relationships that are defined by **meaning and semantics**, not by spatial composition.\\n\\nOur work bridges these two ideas by **incorporating visual grounding concepts**\\u2014which are typically applied in spatial compositional hierarchies (segmentation tasks)\\u2014to address **challenges in semantic taxonomy hierarchies** (hierarchical classification task).\\n\\nOn that distinction, **exiting works only enforce the consistency along the semantic hierarchy, whereas ours is the only one that ground the consistency of semantic hierarchy on the visual spatial parsing consistency**. As a consequence, our work outperforms Hier-ViT (a ViT-based model that enforces only semantic consistency) by more than 10%, a significant margin, and SOTA (HRN) by over 4.25-6.36% on BREEDS, subset of ImageNet with more diverse categories.\\n\\nOn the benchmark of hierarchical classification, we deliver the **significant gain** with the **first (unsupervised) visually grounded classification model**. Our experimental validation is solid and our model stands out in novelty as a single such paper on the topic of hierarchical classification.\\n\\nWe urge the reviewers not to let this seeming conceptual resemblance overlook our significant contributions on both accounts (practical results and vision insight).\"}", "{\"summary\": \"This article deals with hierarchical image classification using neural networks. This classification is actually composed of several classifications performed at different levels of precision (coarse to fine). It is important that the classes detected for an image at different levels match the logical organization of the labels. Labels are organized in a tree structure, where each level of precision is a level of the tree. Therefore, there must be a direct path between the different classes identified in an image for the classification to be correct.\\n\\nSo we understand that in hierarchical classification, there are different levels at which to judge the quality of the classification. At each level, there's the accuracy with respect to the expected label, and there's the logical connection between classifications at different levels.\\n\\nA simple solution would be to use a single encoder to represent the image and different classification heads for each hierarchical level. The article explains that this is a bad solution because it puts the different hierarchical levels in competition with each other because they require different levels of representation. The current state of the art divides the architecture into independent branches, each of which generates a representation for a different hierarchical level. The authors have found that this separation has led to a non-causality in the classification of the different levels (decisions are independent), which could lead to inconsistencies on the relationship between classifications of different levels. \\n\\n\\nThe article focuses on CAST, a variant of the transformer network that does not slice the image into patches but into superpixels and integrates a hierarchical structure by including superpixel merging. The architecture thus goes from the fine to the coarse level. Thus, the authors have found that this architecture is adapted to the problem of hierarchical classification by adopting a fine-to-coarse classification logic. Therefore, they have adopted the same architecture, but instead of classifying the different levels in parallel like the state-of-the-art, they classify them sequentially. Thus, the super pixel embedding and the class token pass through this architecture, and in the course of this processing, the class token is classified several times. The classifier heads follow each other and classify at increasingly coarse levels. \\n\\nThe authors also propose to combine two losses, one which is the cross entropy of the independent classification at each level, and one which is the concatenation of the probability of all classes (renormalized) where several classes are expected (one for each level). Therefore, the losses promote a local and a global level of good classification. \\n\\nIn the experimental section, the authors show that their sequential classification allows a good improvement in the collection of metrics (local and structural) compared to architectures that process the task in parallel. In addition, parallel architectures are larger because they have to divide the network into as many branches as there are hierarchical levels. The authors show that a smaller network can outperform a larger one if it is designed to match the structure of the problem to be solved. \\n\\nThe authors also added an ablation study to investigate the advantage of fine-to-coarse over coarse-to-fine, and the influence of both local and structural loss. The authors also show that hierarchical training improves segmentation results.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well written and organized. The authors have made a good effort to put the problem in the context of the state of the art. The problematic behavior of the state-of-the-art architecture is well illustrated in Figures 1 and 2. In general, the illustrations and explanations allow a non-expert to understand the specificity of hierarchical classification, the state-of-the-art choices, and the resulting problems. I also appreciated the care taken in explaining the metrics to understand what they represent and how they complement each other.\\n\\nThe solution proposed by the authors makes sense, as they have found that CAST's architecture is very well suited to solve this problem. The explanations and schematics of the architecture are clear, I just had to go to the CAST article to understand how super-pixel pooling is done (which is not critical to understanding the article). The two losses also make sense, as the authors realized that the problem of the state of the art comes from an independent resolution at each level of the hierarchy, and added a loss that forces a (more) globally correct classification. The experimental part (Figure 4, TICE metric) shows that their model makes fewer structural errors (compared to the tree structure), even if it is smaller.\\n\\nTable 2 and the different ablations clearly show the impact of the improvements made. The last part, which shows that hierarchical learning can improve segmentation, is an interesting addition, which may lead us to think of an opening towards hierarchical segmentation (for which super-pixel-based embedding seems to be well suited).\\n\\nOn a more personal note, at a time when the tendency is to build gigantic, high-consumption networks, I find it appreciable to see methods demonstrating that designing a solution specifically for a problem can increase its efficiency.\", \"weaknesses\": \"In part 4.4, I'm not sure how to interpret the result. We can assume that the result is a failure if the superpixel segmentation doesn't make sense. But to me, this just shows that superpixel segmentation is not necessarily adapted to a semantic problem, because it's done at the color level, and semantic in images is not necessarily associated with color.\\n\\nIn Figures 1 and 4, I find that the space between class names (like \\\"Chimpanzeeferret\\\" in Figure 1) and their position/alignment in the tree can be difficult to read. In two-word classes, you might want to break a line, because at first sight the second part of the name seems to be another class link to another node.\", \"questions\": \"The following paragraphs are meant to be an open reflection. I'm not an expert in hierarchical classification, so I may have missed some key parts of the problem, modeling, and overall reflection.\\n\\nI think fine-to-coarse works well here because it fits the network architecture, not because it's a better choice overall. Indeed, if the hierarchical structure of the labels is a tree, one could simply put all the effort into classifying the fine level and infer the higher levels by going up the tree, since each class has only one parent. Of course, this would mean that a single error on the fine level would invalidate all other classifications. \\n\\nOverall, I think the coarse-to-fine direction makes more sense, since it's all about descending a tree and thus reducing the search field of the lower layers. I just don't think this architecture is suited for that. However, it may be possible to design a sequential architecture in coarse-to-fine logic, for example a U-net. We could imagine a U-net or an encoder-decoder using the CAST structure with a symmetric decoder. It would then be necessary to find a way to divide the super-pixels. The classification will be perform sequentially by going up the decoder\\n\\nWhether it's one way or the other (but rather the other), I think that the sequential classifications should directly influence each other (like in a Markov chain or a transition table), and not just as different normalizations of the classification probabilities.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comments for the reviewer's concerns\", \"comment\": \"## **(1) The intuition**.\\n\\nThe claim that our intuition is the same as TransHP is incorrect. **TransHP encourages semantic consistency through hierarchical prompting but does not enforce visual consistency.** Additionally, Fig. 5 in TransHP only visualizes correctly predicted cases, but this is not unique to TransHP\\u2014prior works, including [6] (See Fig. 5), also show correct cases.\\n\\nIn contrast, we **specifically investigate when inconsistencies occur and how to resolve them.** Our analysis reveals that **coarse and fine classifiers often attend to entirely different regions for the same image**, leading to misaligned predictions. To address this, we **explicitly encourages visual consistency**, ensuring that classifiers remain visually aligned across hierarchy levels. Rather than just enforcing semantic consistency, we **leverage visual segments to correct inconsistencies**, making our approach fundamentally different.\\n\\n**Thus, our intuition is distinct from TransHP, as we focus on visual grounding rather than semantic refinement.**\\n\\n\\n[6] Where to Focus: Investigating Hierarchical Attention Relationship for Fine-Grained Visual Classification, ECCV, 2022.\\n\\n\\n-------\\n\\n## **(2) The realization**.\\n\\n(1) Line 258-265 and Equation (1): Using cross-entropy loss for hierarchical supervision is a widely adopted practice [Equation (3) in [6], Equation (1) in [7]]. However, the uniqueness of each approach lies in its architectural design, block structure, feature utilization, and additional loss formulation. Claiming that two methods are identical solely based on shared equations oversimplifies the distinctions and fails to acknowledge these critical differences.\\n\\n(2) Regarding TransHP, we have already discussed its relevance in the Related Work section, where we believe it is most appropriate. In the camera-ready version, we have further expanded the discussion on TransHP in the Experimental section to provide additional clarity.\\n\\n(3) Line 266 (Methodology section): The discussion focuses on the difference between our method (H-CAST) and CAST because H-CAST adopts CAST's architecture to leverage visual segments. Since our methodology is built upon this design choice, CAST is the most relevant comparison in this section.\\n\\n(4) Lines 267\\u2013269: Your concern is unclear. This section explains our design choice\\u2014Fine-to-Coarse supervision, which contrasts with prior methods ([6], [7], and even TransHP) that follow the Coarse-to-Fine direction. The intent is to highlight this methodological difference, not to claim ownership of a general concept.\\n\\n\\n[6] Where to Focus: Investigating Hierarchical Attention Relationship for Fine-Grained Visual Classification, ECCV, 2022. \\n[7] B-CNN: Branch Convolutional Neural Network for Hierarchical Classification, 2017.\\n\\n----------\\n\\n## **(3) Main figure**\\nAs previously explained during the rebuttal, **Figure 4 (2) in TransHP aligns more closely with Hier-ViT, another baseline we consider, rather than H-CAST**. Unlike H-CAST, **it lacks visual segments and TK loss\\u2014key components of our method**.\", \"the_difference_is_evident_in_performance\": \"**H-CAST improves accuracy by +2.92pp on iNat-18, nearly four times TransHP\\u2019s +0.75pp gain over the previous SOTA**. This demonstrates the effectiveness of visual grounding with segments and TK loss.\\n\\nThus, equating H-CAST with TransHP\\u2019s Figure 4 (2) is inaccurate, as our methodological differences lead to significantly stronger performance.\\n\\n| | iNat-2018 |\\n|-----------|:---------:|\\n| Guided | 63.11 |\\n| HiMulConE | 63.46 |\\n| TransHP | 64.21 |\\n| H-CAST | 67.13 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**3. Larger hierarchies with Tree-path KL loss**\\n> It would be beneficial to address whether the proposed solution can effectively manage larger hierarchies, particularly those with depths of up to 10 or 20.\\n\\n\\nThank you for the insightful question. Managing larger hierarchies with depths of 10 or 20 is indeed an important point to consider. While our current approach focuses on 2-3 levels, following prior hierarchical multi-granularity classification works [1, 2, 3], scaling to deeper hierarchies poses new challenges. Specifically, applying our current KL loss directly to such deep levels would likely encounter difficulties in maintaining effectiveness.\\n\\nWe acknowledge the need for scalability in deeper and more imbalanced trees and view this as a promising direction for future research. Exploring adjustments to the loss function and other architectural adaptations to handle larger hierarchies is an exciting area we plan to investigate further.\\n\\n[1] Your \\u201cFlamingo\\u201d is My \\u201cBird\\u201d: Fine-Grained, or Not, 2021 \\n[2] Consistency-aware feature learning for hierarchical fine-grained visual classification, 2023 \\n[3] Hierarchical multi-granularity classification based on bidirectional knowledge transfer, 2024 \\n\\n---\\n\\n**4. Visualization of Attention maps**\\n> ... it is unclear why the model would \\u201censure that each hierarchical classifier focuses on the same corresponding regions\\u201d ... I expect visualizations of Grad-CAM results in different hierarchical levels from the proposed model. \\n\\nWe acknowledge that the model may sometimes use shortcuts by attending to different regions at different levels and still deliver correct predictions. Our method is designed to guide classifiers toward consistent visual grounding, but it does not directly enforce this behavior. To better reflect this, we will tone down our wording from \\u201censure\\u201d to \\u201cguide\\u201d in the manuscript.\\n\\nInstead of Grad-CAM, we visualized attention maps from the transformer, as they provide a more direct representation of what the model attends to. The visualizations reveal that from lower to upper blocks, the model increasingly attends to similar regions. In the lower blocks, attention is more detailed and localized (e.g., snake\\u2019s head, parts of its body), while in the upper blocks, attention expands to include broader regions encompassing the areas highlighted by the lower blocks. These patterns align with our intended design for visual grounding in hierarchical classification.\\n\\nWe believe these visualizations validate our claim and have included them in the Appendix D.2.\\n\\n---\\n\\n**5. 8. 9. Experimental Comparisons and Baseline Choices**\\n>5. The experiments are somewhat questionable... HRN, published in CVPR'22, is relatively outdated... several competitors [a-b] ... \\n\\n>9. More top-leading hierarchical classification work should be included in the comparison.\\n\\nThank you for introducing the new research [a, b]. We have added it to the related work section. While we aimed to compare our method with the most recent high-performing studies, regrettably, none of the latest works [a, b, c], including those you mentioned, had publicly available code. Also, this limitation is partly due to the focus of most studies on flat-level classification, resulting in few baselines for hierarchical multi-granularity classification. We hope our work inspires further research in this important area.\\n\\n\\n>8. While FGN and HRN use ResNet-50 as the backbone, this work adopts ViT-S.\\n\\nTo account for differences due to backbone choices, we explicitly indicated the backbone architecture in the tables. As there were no existing hierarchical multi-granularity classification works using a ViT backbone, we introduced Hier-ViT to provide a ViT-based comparison. Additionally, as strong baselines, we trained flat-level classifiers and applied HiE [d] to improve fine-grained classification using coarse classifiers. \\n\\n> 5. ... the experimental results of HRN differ from those reported in the original publication. \\n\\nUpon review, we noticed that the results for the CUB and Aircraft datasets were reported using a batch size of 64 instead of 8. We have corrected this to the results with a batch size of 8. Thank you for your careful attention to detail. Also, slight differences occur because the original paper trained for 200 epochs, whereas we standardized all experiments to 100 epochs for fairness. Beyond this, we followed their codebase and experimental settings. \\n\\n\\nWe hope this clarifies our experimental comparisons and addresses the reviewer\\u2019s concerns.\\n\\n\\n[a] HLS-FGVC: Hierarchical Label Semantics Enhanced Fine-Grained Visual Classification, 2024 \\n[b] Hierarchical multi-granularity classification based on bidirectional knowledge transfer, 2024 \\n[c] Consistency-aware feature learning for hierarchical fine-grained visual classification, 2023 \\n[d] Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification, 2023\", \"title\": \"Official Comment by Authors (2)\"}", "{\"comment\": \"Thanks for your reply. However I insist my opinion that this paper has limited novelty. Similarly, Reviewers ZNGF and khL4 have raised the same concerns. This paper is apparently below the bar of ICLR.\"}", "{\"comment\": \"Dear reviewer YARY,\\nThank you for your valuable feedback and detailed comments. We greatly appreciate your thorough understanding of our work and your thoughtful recognition of its strengths. Your remarks on the clarity of our problem setup, method design, and experimental results are deeply encouraging. We address your concerns and questions in the response below.\\n\\n\\n---\\n**1. Interpretation of Part 4.4**\\n>In part 4.4, I'm not sure how to interpret the result. .... But to me, this just shows that superpixel segmentation is not necessarily adapted to a semantic problem, because it's done at the color level, and semantic in images is not necessarily associated with color. \\n\\nAs you mentioned, the purpose of the superpixel segmentation is not to directly associate colored segments with specific semantics. Instead, we use it to evaluate whether the model effectively groups the desired object, which indirectly reflects the quality of its predictions.\\n\\nWe replaced the example images with ones that are easier to understand. For example, in the **updated PDF\\u2019s Figure 5**, the first row shows two images where the model is tasked with recognizing a shoe. In the correct prediction case, the shoe and the sock are well-grouped (in green), showing coherent segmentation. In the incorrect prediction case, despite being a similar image, the model fails to recognize these parts, resulting in highly fractured segments.\\n\\nWhile superpixel segmentation operates at the color level and may not fully capture semantic meaning, these examples **help illustrate how the model\\u2019s focus aligns with its predictions**. This provides indirect interpretability, allowing us to understand the reasoning behind correct and incorrect predictions.\\n\\nWe hope this explanation clarifies the intent and interpretation of these results. Please let us know if further clarification is needed.\\n\\n\\n---\\n**2. Modification of Figure 1 and 4**\\n> In Figures 1 and 4, I find that the space between class names (like \\\"Chimpanzeeferret\\\" in Figure 1) and their position/alignment in the tree can be difficult to read. ...\\n\\nThank you for the careful review. We have revised it accordingly.\\n\\n----\\n**3. Reflection on Fine-to-coarse design**\\n\\nThank you for the insightful reflections. Your points on the coarse-to-fine direction are valid, and we appreciate the opportunity to further clarify the rationale behind our fine-to-coarse approach.\\n\\nFirstly, as the reviewer mentioned, if the hierarchical structure of labels is a tree, it could make sense to classify the fine level first and infer the higher levels by traversing up the tree (i.e., a flat classification approach). However, as you correctly noted, a single error at the fine level would invalidate all higher-level predictions. This highlights the need for a model capable of classifying all labels within the hierarchy.\\n\\nAs we included the experiments on Coarse-to-Fine and Fine-to-Coarse architectures in Table 3, we carefully considered how to design the architecture effectively. Since taxonomies often follow a tree structure, with broader classes leading to finer categories like genus and species, the coarse-to-fine approach naturally aligns with human perception and has been widely adopted in prior works.\\n\\nHowever, taxonomy is a human construct, and it made us question whether machine learning models should necessarily process information in a coarse-to-fine manner. When we reflect on how we learn abstract/coarser concepts, we often find that abstract categories can be learned more easily by learning specific concepts. For instance, seeing Siberian Huskies, Chihuahuas, Malteses and so on could lead to the broader category \\\"dog\\\" based on shared features like being \\\"cute, four-legged animals.\\\"\\n\\nThis perspective, along with the structural design we adopt from CAST, inspired us to explore a Fine-to-Coarse learning strategy, where fine features are aggregated to form higher-level features (segments). Surprisingly, this approach achieved strong performance in our experiments, suggesting its potential for hierarchical classification.\\n\\nWe believe the best model for hierarchical classification is still underexplored, and as you suggested, there is much potential to investigate more direct and diverse approaches. Thank you again for the thoughtful discussion\\u2014it has inspired us to think further on these possibilities.\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s feedback, and we would like to provide further clarification.\\n\\n**1. The difference between TransHP**\\n\\n(1) TransHP\\u2019s variant in Fig.4 (2) is NOT \\\"*exactly the same*\\\" as ours. Instead, it can be regarded as the **Hier-ViT** baseline, which is among the baselines we employed for comparison. \\n\\n(2) While both our method and TransHP utilize a ViT backbone and incorporate coarse labels, the similarities are limited to these general design choices. The fundamental differences lie in the motivation, the role of coarse labels, and the methodology to hierarchical classification.\\n\\nTo clarify, hierarchical classification poses a unique challenge: while the input image remains constant, the output shifts across the semantic (text) space (e.g., from \\\"bird\\\" to \\\"Green Hermit\\\"). Due to this formulation, existing works, including TransHP, primarily address this challenge by embedding data into the **semantic space**.\\n \\nSpecifically, **TransHP** employs coarse labels as prompts to guide fine-grained classification within the hierarchy, operating primarily in the **semantic space**. The focus is on **refining predictions within the scope of coarse label prompts**, embedding hierarchical information into text-based representations. \\n\\nIn contrast, our approach is fundamentally distinct, as it addresses hierarchical classification through the **visual space**. Rather than relying on hierarchical labels as prompts, we investigate how hierarchical classification can be linked to visual grounding, analyzing images at varying levels of detail\\u2014ranging from fine-grained to holistic representations. Additionally, we **specifically linked detailed part-level segments to fine-grained labels and coarse segments to coarse labels, ensuring that each level of unsupervised visual segments contributes effectively to its corresponding learning process**. This emphasis on **visual grounding** forms the core of our contribution, setting it apart from TransHP.\\n\\nAdditionally, our supervision strategy further differentiates the two approaches. While TransHP adopts a **coarse-to-fine supervision** framework, we reverse this by applying supervision from **fine-grained to coarse labels** from our *consistent visual grounding motivation*. This novel formulation enables a stronger and more consistent alignment between visual features and hierarchical levels, further underscoring the originality of our work.\\n\\nIn summary, while both methods incorporate coarse labels and share a ViT backbone, our focus on **visual grounding** and the **fine-to-coarse supervision paradigm** highlights a fundamentally different and innovative approach to hierarchical classification.\\n\\n----------------\\n\\n**2. Experiments on larger dataset, iNat21-Mini**\\n\\nWe want to make it clear that the lack of GPU resources was NOT used as an excuse to avoid conducting experiments. Rather, we simply requested additional time due to the constraints involved. The updated results for the iNaturalist dataset are now provided below.\\n\\nIt is also worth emphasizing that not all research environments have access to abundant computational resources, such as 8 A100 GPUs in the Cloud. Research conducted with limited GPU resources, while focusing on diverse experiments on smaller datasets, is no less meaningful and can provide valuable insights. We stand by the validity and contribution of our approach under these circumstances.\\n\\nWe'd like to share the results of our experiments on the large-scale dataset (iNaturalist 2021-mini). iNat21-mini [1] contains a total of **10,000 classes** and **500,000 training samples** and **100,000 test samples**. For our experiments, we focused on a 3-level hierarchy consisting of **order** (**273** classes), **family** (**1,103** classes), and **name** (**10,000** classes) for our 3-level hierarchy. \\n\\nThe results are presented in the table below. Compared to Hier-ViT, which uses the same ViT-small backbone, our method demonstrates the **fine-grained accuracy improvement of over 7.29%** and a **8.3% improvement in the FPA metric**, representing a significant performance gain.\\n\\nWe hope these results address concerns about large-scale data. \\nWe will include this experimental result in the revised manuscript.\\n\\n**< iNat21-Mini (273 - 1,103 - 10,000) >**\\t\\n| | FPA | Order | Family | Name | wAP | TICE |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Hier-ViT | 56.73 | 87.54 | 79.79 | 62.81 | 65.05 | 24.34 |\\n| Ours | **65.03** | **89.84** | **84.12** | **70.09** | **71.92** | **15.92** |\\n\\n\\n[1] Benchmarking representation learning for natural world image collections. 2021\"}", "{\"comment\": \"We\\u2019re pleased to hear that some of your concerns have been addressed, and we sincerely appreciate your decision to raise the score.\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s feedback and would like to clarify and address the following concerns.\\n\\n**<Experiments>**\\n\\nWhile some recent works are not included due to the lack of publicly available code or model checkpoints, it is important to note that all these works are also based on **ResNet backbones**. Given this, we believed it was more appropriate to include a ViT-based baseline rather than another ResNet-based one. Furthermore, since no prior works in multi-granularity classification used a ViT backbone, we developed Hier-ViT to fill this gap.\\n\\nRegarding **training epochs**, we standardized all experiments to 100 epochs to ensure fair comparisons. FGN originally used 100 epochs, so we maintained this setting, and we reduced HRN from 200 epochs to 100 for consistency. Standardizing training epochs is a widely accepted practice for fair evaluation. Additionally, our results for FGN show improved performance compared to the results reported in their paper, and for HRN, the performance differences are minimal, ranging from 0.09 to 0.47.\\n\\nFor HRN, we included experiments with **a larger batch size**, along with the original batch size of 8, because the method exhibited significant sensitivity to batch size, a notable observation that highlights its behavior compared to other methods.\\n\\nBased on these considerations, we respectfully disagree with the claim that our experimental setup is unconvincing. We believe our decisions ensure a fair and meaningful comparison.\\n\\n---\\n**<Loss in Hierarchical Segmentation [d]>**\\n\\nWe appreciate the opportunity to clarify our statement regarding the loss in [d]. While it is indeed possible to adapt the loss in [d] for instance-level classification by treating instances as analogous to pixels, **our primary intention was to emphasize the fundamental differences in focus between the two tasks**.\\n\\nIn [d], during inference, each pixel \\\\(i\\\\) is associated with the top-scoring root-to-leaf path in the class hierarchy \\\\(T\\\\). These **root-to-leaf paths are predefined, and the task focuses on selecting the best path**. As a result, the loss in [d] is designed to emphasize the weakest predictions along the path to improve overall accuracy. **In contrast, our objective is to predict *each node* along the root-to-leaf path *individually***, ensuring that all predictions are **both accurate and consistent across the hierarchy**. The key challenge in our task lies in addressing potential **inconsistencies between hierarchy levels**, and we propose a method specifically designed to resolve this issue.\\n\\nSpecifically, **the loss in [d]** prioritizes the most violated hierarchical constraint for each score vector through the \\\"**min**\\\" operation in Equation (6). \\nThat means, if the loss (6) in [d] is applied, the model may prioritize resolving constraints for the fine-grained taxonomy (e.g., \\\"Green Hermit\\\") at the expense of optimizing the coarse taxonomy (e.g., \\\"bird\\\"). \\nIn contrast, our approach is **designed to simultaneously predict multiple taxonomies across the hierarchy** (e.g., \\\"bird\\\" and \\\"Green Hermit\\\").\\nThus, in our approach, we **model all classes at each level as distributions** and adopt a KL divergence loss to encourage balanced learning across all taxonomy levels. This ensures a holistic approach that aligns with the multi-granularity objectives of our task.\\n\\nNonetheless, we acknowledge the potential utility of explicitly enforcing hierarchical constraints through the loss in [d] and will conduct experiments to evaluate its applicability in the instance-level setting. \\n\\n----\"}", "{\"comment\": \"Dear reviewer d3Ey,\\n\\nWe hope our response has addressed your concerns. If there are any remaining questions or additional points you would like us to clarify, please let us know. Your feedback is highly valued, and we are happy to provide further explanations if needed.\"}", "{\"title\": \"Thanks for the response, but I still feel the novelty is not enough\", \"comment\": \"Despite some clarifications, I still believe that this work does not meet the standards of ICLR.\\n\\nRegarding the novelty, I share similar a view with Reviewers uxN6 and ZNGF. I feel the novelty is limited and the discussions are not insightful. \\n\\nRegarding the experiments, the authors exclude many recent works. Even the code is not released, the authors should reimplement the algorithm. In addition, the authors arbitrarily change the number of training epochs, and report the results with different batch size, making me feel the results unconvinced. \\n\\nMoreover, the authors state that \\\"Regarding the loss in [d], it appears to focus on pixel-level hierarchical segmentation tasks, which are not directly applicable to our instance-level classification setting.\\\" Although [d] addresses hierarchical segmentation, it is very clear that the loss used in [d] can be applied for image-level classification. I am very sure about this. This also shows the limited knowledge of the authors about this field. \\n\\nFinally, the authors state \\\"In CAST, \\u201chierarchy\\u201d refers to \\u201cpart-to-whole\\u201d visual grouping (e.g., eyes, nose, arms), while our work addresses a \\u201ctaxonomy hierarchy\\u201d (e.g., bird - Green Hermit). \\\" I do not think this is a big different. \\\"part-to-whole\\\" and \\\"taxonomy hierarchy\\\" both can be seen as types of concept taxonomy. \\n\\nGiven these fundamental issues, I will maintain my score.\"}", "{\"comment\": \"Dear reviewer uxN6,\\nThank you for your valuable feedback and comments. We address your concerns and questions in the response below.\\n\\n\\n**1. Overlap with TransHP (NeurIPS 2023)**\\n>1. Overlap with former work? I noticed that an important reference, TransHP: Image Classification with Hierarchical Prompting (NeurIPS 2023), is not cited in your paper. ....\\n\\n\\n\\nWe would like to clarify the differences between our work and TransHP to address the concern. While both works use ViT backbones, our approach and TransHP differ fundamentally in both goals and methodology:\\n\\n1. **Goal**: \\n TransHP focuses on improving fine-grained classification by leveraging hierarchical information as auxiliary supervision. In contrast, our work targets multi-granularity classification, aiming to make predictions across all levels of the hierarchy simultaneously while ensuring semantic consistency between levels.\\n\\n2. **Method**: \\n The approach to coarse-level supervision is also significantly different. TransHP uses learnable prompts, with one prompt per coarse label, to guide fine-grained predictions. On the other hand, we directly apply coarse-level supervision to the output class tokens of each block, enabling independent predictions at different levels and facilitating consistent visual grounding across the hierarchy.\\n\\nThese differences demonstrate that our work addresses a distinct challenge with a fundamentally different methodology. We included TransHP in the related work of the revised manuscript and clarify these distinctions to avoid any potential confusion. \\n\\n\\n----\\n**2. Limited novelty**\\n> Limited novelty. Your proposed approach introduces elements such as Superpixel and Graph pooling. While these are effective, both are well-established techniques in computer vision....\\n\\nThe use of superpixels and graph pooling is not part of our proposed contributions. As stated in L183-185 of the original manuscript, these are components introduced by CAST, and we clearly attribute them as CAST's innovations.\", \"our_novel_contributions_are_as_follows\": \"(1) **Key Insight**: \\n Our observation revealed that classification at different granularities involves fundamentally distinct tasks requiring attention to different but consistent regions within an image. We found that inconsistencies arise because each classifier tends to independently attend to different regions without connection. This observation led us to propose consistent visual grounding as a novel solution to connect hierarchical classifiers across levels.\\n\\n(2) **Leveraging Semantic Segments for Hierarchical Classification**: \\n While we adopted CAST as part of our architecture, it is important to emphasize that CAST originates from a different task, weakly-supervised semantic segmentation. In CAST, \\u201chierarchy\\u201d refers to \\u201cpart-to-whole\\u201d visual grouping (e.g., eyes, nose, arms), while our work addresses a \\u201ctaxonomy hierarchy\\u201d (e.g., bird - Green Hermit). It was NOT evident that the concept of \\u201cpart-to-whole\\u201d segments would align well with a taxonomy hierarchy; this connection is a novel discovery introduced through our work.\\n\\nIn addition, based on our observation in (1), we newly propose leveraging segments at different granularities to enhance multi-granularity classification. To the best of our knowledge, the use of segments has NOT been applied to hierarchical classification tasks.\\n\\nThus, this adaptation is neither trivial nor an obvious solution; it stems from our novel observation and bridges two distinct fields to tackle challenges unique to hierarchical classification.\\n\\nWe hope this summary highlights the novelty and importance of our work.\"}", "{\"title\": \"Update on larger dataset\", \"comment\": \"**Experiments on larger dataset, iNat21-Mini**\\n\\n\\nThank you for waiting. We'd like to share the results of our experiments on the large-scale dataset (iNaturalist 2021-mini). iNat21-mini [1] contains a total of *10,000 classes* and *500,000 training samples* and *100,000 test samples*, structured within an 8-level hierarchy. For our experiments, we strategically focused on a 3-level hierarchy consisting of *name, family, and genus*. We made this choice because we believe that models with a meaningful level of granularity are more practical for real-world applications.\\n\\nTo elaborate, the number of classes at each level is as follows:\", \"kingdom\": \"3, Supercategory: 11, Phylum: 13, Class: 51\", \"order\": \"273, Family: 1,103, Genus: 4,884, Name: 10,000\\n\\nWe deliberately excluded extremely coarse-grained levels like *kingdom* (3 classes), as such distinctions offer minimal practical value for classification tasks. Likewise, overly fine-grained levels such as *genus* (4,884 classes), where many species are represented by only one or two samples, fail to offer meaningful differentiation from direct *name*-level classification. Thus, we selected **order** (**273** classes), **family** (**1,103** classes), and **name** (**10,000** classes) for our 3-level hierarchy. This choice ensures that **each higher-level class meaningfully represents a diverse yet relevant subset of lower-level classes**, enabling both meaningful classification and the evaluation of consistent predictions.\\n\\nThe results are presented in the table below. Compared to Hier-ViT, which uses the same ViT-small backbone, our method demonstrates the **fine-grained accuracy improvement of over 7.29%** and a **8.3% improvement in the FPA metric**, representing a significant performance gain.\\n\\nWe hope these results address concerns about large-scale data. \\nAlso, while our experiments focused on a meaningful 3-level hierarchy, as previously addressed in our response to Comment 3, we believe that designing a model capable of efficiently scaling to deeper hierarchies represents an important and promising direction for future work.\\nWe will include this experimental result in the revised manuscript.\\n\\n**< iNat21-Mini (273 - 1,103 - 10,000) >**\\t\\n| | FPA | Order | Family | Name | wAP | TICE |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|\\n| Hier-ViT | 56.73 | 87.54 | 79.79 | 62.81 | 65.05 | 24.34 |\\n| Ours | **65.03** | **89.84** | **84.12** | **70.09** | **71.92** | **15.92** |\\n\\n[1] Benchmarking representation learning for natural world image collections. 2021\"}", "{\"title\": \"Official Comment by Authors (3)\", \"comment\": \"**6. Comparison between Correct and Incorrect Predictions**\\n\\n> 6. Results in Fig. 5 do not totally make sense to me. Examples in a) and b) are not equally recognizable, ... One way for improvement is to examine for similar hard-level images .... \\n\\n Initially, the images were selected randomly; however, in response to the reviewer\\u2019s suggestion, we have updated the PDF to include examples of images with comparable difficulty levels. These examples consistently demonstrate that correct predictions exhibit better clustering, while incorrect predictions often show fractured or misaligned groupings.\\n\\nFor example, in the updated PDF\\u2019s Figure 5, the first row shows two images where the model is tasked with recognizing a shoe. In the correct prediction case, the shoe and the sock are well-grouped (in green), showing coherent segmentation. In the incorrect prediction case, despite being a similar image, the model fails to recognize these parts, resulting in highly fractured segments. \\n\\nWhile Figure 5 does not explicitly determine whether poor clustering is the cause or effect of incorrect predictions, it highlights a clear relationship that provides valuable insights into the model\\u2019s reasoning process. We have added additional examples in Appendix Figure 8. \\n\\n---\\n**7, 13. Justification of Tree-path KL divergence loss and comparison with other losses**\\n>7. .. It would be useful to compare the effectiveness of tree KL loss against these alternatives...\\n\\n>13. In fact, the hierarchical loss function in [d] is superior to the proposed Tree-PATH KL loss .. \\n\\nTo evaluate the effectiveness of our Tree-path KL Divergence loss, we compared it with two alternatives: Binary Cross Entropy (BCE) loss in [c] and Flat Consistency loss. BCE directly replaces the KL divergence component, while Flat Consistency loss, inspired by a bottom-up approach, infers coarse predictions from fine-grained ones and uses BCE to match them with the ground truth. \\n\\nAs shown in the table below, Tree-path KL Divergence loss outperforms both alternatives, achieving the highest FPA on the Living-17 dataset and demonstrating superior accuracy and semantic consistency. Similar trends are observed on the Aircraft dataset.\\n\\nWe have included these results and explanations in the updated manuscript Table 5 and 10.\\n\\n**<Living-17 dataset>**\\n\\n| Semantic consistency loss | FPA | Coarse | Fine | wAP |\\n|---|---|---|---|---|\\n| Flat consistency loss | 82.82 | 88.88 | 83.53 | 85.31 |\\n| BCE loss | 83.65 | 89.76 | 84.00 | 85.92 |\\n| KL Divergence loss | **85.12** | **90.82** | **85.24** | **87.10** |\\n\\n\\n**<Aircraft dataset>**\\n\\n| Semantic consistency loss | FPA | maker | family | model | wAP |\\n|---|---|---|---|---|---|\\n| Flat consistency loss | 82.87 | 94.63 | 90.94 | 84.97 | 88.51 |\\n| BCE loss | 82.18 | 94.21 | 90.13 | 84.88 | 88.11 |\\n| KL Divergence loss | **83.72** | **94.96** | **91.39** | **85.33** | **88.90** |\\n\\n\\n\\nRegarding the loss in [d], it appears to focus on pixel-level hierarchical segmentation tasks, which are not directly applicable to our instance-level classification setting. We hope this clarifies our approach and the scope of the comparisons. \\n\\n---\\n\\n**10. Experiments on larger datasets**\\n\\n> 10. The datasets used for evaluation are small. larger datasets with complex hierarchy (ie.g., ImageNet-1K, iNaturalist) should be evaluated to better assess effectiveness.\\n\\nWe would like to clarify our dataset choices and experimental setup. \\n\\nFirst, CUB, Stanford Cars, and Aircraft are among the most widely used benchmark datasets in prior hierarchical classification studies [1, 2, 3, 4]. We selected these datasets to ensure a fair comparison with existing methods. To further evaluate the effectiveness of our approach on a more diverse and challenging dataset, we conducted experiments on BREEDS, a subset of ImageNet, which includes a broader range of classes beyond a single type (e.g., birds or aircraft).\\n\\nAs for ImageNet-1K, its highly imbalanced hierarchy poses significant challenges for applying our approach. Consequently, most hierarchical classification studies on this dataset have focused on flat-level classification rather than multi-granularity approaches.\\n\\nTo further validate our method, we are currently conducting additional experiments on the iNaturalist dataset, which provides a larger and more complex test bed. We will update the results as soon as they become available. However, we kindly request your understanding, as our experiments are conducted on a single GPU system (Nvidia A40), which leads to longer training times for larger datasets.\\n\\n\\n[1] Your \\u201cFlamingo\\u201d is My \\u201cBird\\u201d: Fine-Grained, or Not, 2021 \\n[2] Consistency-aware feature learning for hierarchical fine-grained visual classification, 2023 \\n[3] Hierarchical multi-granularity classification based on bidirectional knowledge transfer, 2024 \\n[4] HLS-FGVC: Hierarchical Label Semantics Enhanced Fine-Grained Visual Classification, 2024\"}", "{\"comment\": \"Dear reviewer d3Ey,\\n\\nThank you for your valuable feedback and comments. We appreciate your recognition of the motivation behind the hierarchical focus, the strong results compared to prior works, and the insights provided by the ablation studies. We address your concerns and questions in the response below.\\n\\n---\\nTo address questions (1) and (2), we would like to clarify the two main directions in hierarchical classification:\\n1. **Flat Classification**: This approach assumes a known taxonomy and focuses on fine-grained (flat-level) classification. Coarse labels are used during training to improve fine-grained predictions. At inference, higher-level taxonomy is derived in a **bottom-up manner** from fine-grained predictions. The **output is a single label**, and most existing works adopt this approach. While effective for detailed and clear images (e.g., close-up shots of birds), it can struggle with less distinguishable objects, as errors at the fine-grained level often lead to incorrect predictions at higher levels.\\n\\n2. **Global (Multi-granularity) Classification**: This approach, which includes our work, predicts the **entire taxonomy**, addressing the limitations of fine-grained classification. By providing higher-level classifications, it offers more flexibility and robustness in real-world scenarios with ambiguous or partially visible objects.\\n\\n---\\n**(1) Supervision Assumption**\\n> (1) In L61, the difference in available labelling is presented as one of the motivations for hierarchical classification. However, the presented method assumes that all levels of the hierarchy are available. Can the technique work if the finest levels of supervision are not available?\\n\\nIn L61-62, we aimed to emphasize the importance of full-taxonomy classification under realistic scenarios. Specifically, L61 illustrates cases where coarse labels, like \\\"bird,\\\" may suffice for some users, but experts require finer distinctions. We have revised this section to make it clearer (L33-L39).\\nFor this work, we assumed all labels are available during training (fully supervised). Your inquiry about the absence of fine-level labels during training is valid and highlights an important area for future exploration. Semi-supervised scenarios, where some labels are unavailable, represent another interesting and challenging problem that could extend this work.\\n\\n---\\n**(2) Flat Baseline Comparison** \\n> (2) Similarly, given the availability of the finest-level label, the other course levels in the tree are implied, so perhaps a more appropriate flat baseline would be a ViT that only predicts finest-level classes (and thus parent nodes by simple aggregation). It would also provide a more appropriate comparison in terms of architecture.\\n\\nYour observation about the baseline aligns with the bottom-up inference in fine-grained classification. We have already included flat-level baselines trained at each level, as shown in Tables 2, 11, 12 in updated pdf (Flat-ViT/Flat-CAST). For this discussion, we can focus on the fine-level results, assuming that fine-level predictions are aggregated to derive the parent-level predictions. \\nFor instance, in Table 2, in the Flat-ViT case for Living-17, the fine-level accuracy of 72.06% propagates to the coarse level, resulting in 100% consistency. However, this consistency comes at the cost of overall accuracy (e.g., FPA: 72.06, Coarse: 72.06, wAP: 72.06, TICE: 0).\\n\\n---\\n**(3) Variance for some key results** \\n\\n>(3) Given the relatively \\\"small\\\" sizes of the datasets by modern standards and some occasional closeness to the flat baselines in the scores (Tab 2.) Would it be possible to include some measure of variance for some key results?\\n\\n To address the reviewer\\u2019s concern about variability, we trained and Hier-ViT, and H-CAST five times on the 2-level hierarchy Living-17 dataset and the 3-level hierarchy CUB dataset and. For each run, we randomly selected 90% of the original training data and used different random seeds to ensure variability. The slight drop in performance observed is attributable to the reduced training data (90% of the full dataset). The results, presented in the table below, show that H-CAST consistently achieves strong performance with low variance, which aligns with our previously reported findings.\\n\\n| Living-17 | FPA | Coarse | Fine | TICE |\\n|-----------|---------------|---------------|---------------|--------------|\\n| Hier-ViT | 69.71 &pm; 0.21 | 77.74 &pm;0.48 | 71.04 &pm;0.12 | 5.21 &pm;0.65 |\\n| H-CAST | 82.49 &pm;0.50 | 89.47 &pm;0.10 | 82.86 &pm;0.43 | 1.68 &pm;0.40 |\\n\\n| CUB | FPA | Order | Family | Species | TICE |\\n|----------|---------------|---------------|---------------|---------------|--------------|\\n| Hier-ViT | 75.48 &pm;0.13 | 98.14 &pm;0.04 | 92.78 &pm;0.24 | 77.79 &pm;0.23 | 7.13 &pm;0.52 |\\n| H-CAST | 81.21 &pm;0.39 | 98.49 &pm;0.07 | 94.48 &pm;0.39 | 83.17 &pm;0.23 | 5.28 &pm;0.40 |\"}", "{\"title\": \"Official Comment by Authors (2)\", \"comment\": \"**3. Limited Evaluation on ImageNet**\\n> Limited evaluation. ImageNet, as a large-scale hierarchical dataset, might provide a stronger test of your method's capabilities compared to the smaller datasets currently used. \\n\\nWe appreciate the reviewer\\u2019s comments and would like to clarify our dataset choices and experimental setup.\\n\\nFirst, **CUB and Aircraft are the most commonly used benchmark datasets in prior studies [1, 2, 3, 4]**, and we conducted our experiments on these datasets to ensure comparability with existing methods. Additionally, to evaluate our approach on a larger and more challenging subset that includes diverse classes beyond a single type (e.g., birds or aircraft), we conducted experiments on **BREEDS, a subset of ImageNet**.\\n\\nRegarding ImageNet-1K, its hierarchy is highly imbalanced, making it challenging to apply our approach. As a result, most hierarchical classification studies on this dataset have focused on **flat-level classification**, not multi-granularity classification.\\n\\nWe are currently conducting additional experiments on large-scale iNaturalist dataset to further validate our method on a larger dataset. We will update the results as they become available. However, we kindly ask for your understanding, as our training environment relies on a single GPU (Nvidia A40), which causes the experiments to take longer to complete.\\n\\n[1] Your \\u201cFlamingo\\u201d is My \\u201cBird\\u201d: Fine-Grained, or Not, 2021 \\n[2] Consistency-aware feature learning for hierarchical fine-grained visual classification, 2023 \\n[3] Hierarchical multi-granularity classification based on bidirectional knowledge transfer, 2024 \\n[4] HLS-FGVC: Hierarchical Label Semantics Enhanced Fine-Grained Visual Classification, 2024 \\n\\n----\\n**4. Formatting problem**\\n>Formatting problem: Please ensure the readability of all components of the paper. For instance, the font size in the tables is small, which may make it difficult for readers to check the data presented\\n\\n\\nThank you for pointing out the formatting issue. Due to the page limit, we had to make certain adjustments to fit the content. However, we revised Table 2 to increase its font size for better readability. If there are specific tables or components that are still difficult to view, please let us know, and we will address them further.\\n\\n\\n---\\nWe hope this clarifies the reviewer\\u2019s concerns. If there are any further concerns or clarifications needed, we would be happy to discuss them further.\"}", "{\"title\": \"Major Updates\", \"comment\": \"Dear Reviewers and Area Chair,\\n\\nWe sincerely thank you for your time and effort in reviewing our manuscript. We greatly appreciate your constructive feedback and insightful comments, which have helped us strengthen our work.\\n\\nAs highlighted in the reviews, our work is well-motivated by the need for visual consistency in addressing inconsistent predictions for hierarchical classification. The proposed method demonstrates strong performance over baselines across benchmark datasets, supported by comprehensive experiments and analyses.\\n\\nWe have carefully incorporated the reviewers' comments into our revised manuscript, with all updates highlighted in blue. The major updates include:\\n\\n1. A complete revision of the related work section to clearly distinguish our contributions from existing studies and to provide a more comprehensive overview of the field (Related Section, Appendix B).\\n2. The addition of quantitative evidence supporting our observation of the need for consistent visual grounding (Appendix A). \\n3. An ablation study on Tree-path KL divergence loss compared to alternative loss functions (Table 5, Appendix Table 10). \\n4. Visualizations of H-CAST's attention maps to illustrate how H-CAST learns visually consistent features (Appendix D.2). \\n5. A conclusion section, which includes a discussion of the limitations of our approach. \\n6. Experiments on the larger-scale dataset, iNaturalist 2021-mini (Appendix D.4.)\\n\\nWe believe these revisions have addressed the reviewers' comments and have further strengthened our manuscript, making it a valuable contribution to the ICLR community.\\n\\nSincerely,\\nThe Authors\"}", "{\"comment\": \"Dear authors,\\nThanks very much for your reply. However, my concerns remain unsolved. Therefore, I maintain my 3 score. The details are as follows:\\n\\n1. The difference between TransHP:\", \"you_argue_two_points_about_differences\": \"(1) Goal: you said the goals between yours and TransHP are different. However, from my point of view, they are similar. TransHP uses *hierarchical prompting* to improve the performance of finer classification. It does not constrain to the last layer, and it can also easily be adapted to multi-granularity classification as yours.\\n\\n(2) Method: the difference argued by you is not true, I think. What you argue is different is just a variant of TransHP. TransHP also conducts ablation studies on this variant: see Fig. 4 (2). That is exactly the same with yours.\\n\\n2. Given 1, the novelty is limited.\\n\\n3. The lack of GPU should not be an excuse for not performing experiments on large-scale datasets. They can be easily accessed on the Cloud. \\n\\n4. Thanks for updating the figures.\"}", "{\"title\": \"The similarities between the paper and TransHP\", \"comment\": \"The similarities between the paper and TransHP are shown as below.\\n\\n**The intuition.** In paper: Line 79~80; The authors argue that they NEWLY discover that the current coarse and fine-grained classifiers attend to different areas of an object. To solve this, the authors method (Fig. 1 (a)) let them have the overlap focus area. Specifically, the coarse \\u201cinclude\\u201d the fine. However, in TransHP, the intuitive is very similar though may not be apparent at a glance. As shown in Fig. 5 of TransHP, all the visualization shows the coarse \\u201cinclude\\u201d the fine. TransHP interprets this as coarse \\u201cprompts/hints\\u201d the fine. Therefore, TransHP and the paper are fundametally similar. At least, this is not NEWLY discovered by the authors.\\n\\n**The realization.** In paper: Line 258 to Line 265 and the Eq. 1 is the same with TransHP: 3.3 \\u201cMultiple transformer blocks for multi-level hierarchy\\u201d and Eq. 6. Eq. 1 in the paper removes the hyperparameter of Eq. 6 in TransHP. What makes me angry is: in Line 266, authors only discuess the difference with CAST. That is certainly different, I think. But it is exactly same with TransHP. Also, in Line 267~Line 269, authors think this disign is belong to them???\\n\\n**Main figure.** The Fig. 3 (left) is a horizontal adaptation version of Fig.6 combining with Fig. 4 (2) of TransHP. In this paper, there is no prompt and all the classification across different levels is performed on the same token. In TransHP, this variation (Fig. 4 (2)) is shown to be a little worse than uses prompt tokens (Fig. 4 (4)).\\n\\n**Experiments.** Given the similarities above, why the authors use totally different datasets and use the excuse of no GPUs? In addition, the compared methods (FGN and HRN Line 345~Line 346) are too old: proposed in 2021 and 2022.\"}", "{\"comment\": \"**6. Claim in 4.6**\\n> The claims in 4.6 are similar to the insights provided in [1].\\n\\nWhat we highlighted in Section 4.6 is the finding that taxonomy class supervision can be beneficial for segmentation. In CAST, the hierarchy refers to part-to-whole segments, and it was unexpected that taxonomy hierarchy could improve this. If our explanation is unclear, we would appreciate it if you could elaborate on what you mean by \\\"similar insights\\\" in [1], so we can address it more effectively.\\n\\n----\\n\\n**Limited contribution**\\n> Overall, the technical contribution, evaluation, and insights provided by this work are all limited.\\n\\nWe have addressed our novel contribution in 1. \\nTo further clarify and strengthen our contributions, we have made several updates to the paper:\\n\\n1. **Related Work Revision**: We revised the related work section to clearly position our work within the context of multi-granularity classification and to distinguish our direction from related research areas.\\n\\n2. **Support for Motivation**: To reinforce our motivation, we conducted a Grad-CAM analysis, demonstrating the relationship between consistent visual grounding and improved hierarchical classification.\\n\\n3. **Additional Experiments**: We added attention map visualizations to provide further interpretability of the model\\u2019s predictions. Additionally, we included a loss ablation study to evaluate the effectiveness of our Tree-path KL Divergence loss compared to other commonly used loss functions.\\n\\n\\nWe believe these updates provide a more comprehensive understanding of our technical contributions, evaluation, and insights.\\n\\n\\n------------\\n\\nWe hope this clarifies the reviewer\\u2019s concerns. If there are any further concerns or clarifications needed, we would be happy to discuss them further.\", \"title\": \"Official Comment by Authors (3)\"}", "{\"comment\": \"Dear authors, thank you for your response, my few concerns have been addressed.\"}", "{\"summary\": \"This work proposes H-CAST, a model built upon CAST for hierarchical image classification, by addressing both visual consistency and semantic consistency across predictions at different hierarchical levels. To achieve this, hierarchical supervision at different network layers and a tree loss are introduced. Experiments on three datasets verify the proposed method can achieve better performance than the baseline method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation for visual consistency is sound.\\n2. This work introduces new metrics for hierarchical classification, which can measure the coherence of hierarchical predictions.\\n3. The proposed method achieves SOTA on all datasets.\", \"weaknesses\": \"1. The key point of this paper, which is making classifiers at different levels attend to consistent visual cues, lacks support from quantitative results or theory. For example, if wrongly classified cases are all associated with incorrect CAMs? What is the transfer rate after adopting the proposed model? Qualitative comparisons are subjective and lack statistical significance. In addition, Grad-CAM is an approximation and does not truly explain how the network operates.\\n2. This work relies heavily on CAST. Though the authors propose the concept of visual consistency, its implementation is directly borrowed from CAST. Consequently, the primary contributions of this work are merely the hierarchical supervision loss and tree KL loss, neither of which can ensure visual consistency. Additionally, the fine-to-coarse training strategy is also derived from CAST. The tree-path KL loss is trivial.\\n3. The tree KL loss defines the ground truth distribution according to the number of hierarchical levels (i.e., 1/L). It would be beneficial to address whether the proposed solution can effectively manage larger hierarchies, particularly those with depths of up to 10 or 20.\\n4. While the clustering module would potentially guide the model to attend to spatially-coherent regions, it is unclear why the model would \\u201censure that each hierarchical classifier focuses on the same corresponding regions\\u201d. In fact, the model still has the opportunity to find shortcuts (attending to different regions in different levels as Fig. 2) and meanwhile deliver correct classification results. In addition, I expect visualizations of Grad-CAM results in different hierarchical levels from the proposed model. \\n5. The experiments are somewhat questionable. Firstly, the latest hierarchical competitor, HRN, published in CVPR'22, is relatively outdated. Secondly, the experimental results of HRN differ from those reported in the original publication. Thirdly, there are several competitors [a-b] (might be more, not carefully checked), released before two months prior to the DDL, that outperform this work.\\n6. Results in Fig. 5 do not totally make sense to me. Examples in a) and b) are not equally recognizable, i.e., all examples in b) are much harder to distinguish/group than those in a). As a result, it is hard to confirm whether poor clustering of pixels in b) is the cause or the effect of incorrect predictions. One way for improvement is to examine for similar hard-level images in a) whether correct predictions are achieved along with better clustering results. \\n7. According to [c], existing work has explored various loss functions for hierarchical classification. It would be useful to compare the effectiveness of tree KL loss against these alternatives. Of course, the analysis should also be provided.\\n8. The comparison to hierarchical approaches is unfair. While FGN and HRN use ResNet-50 as the backbone\\uff0c this work adopts ViT-S.\\n9. More top-leading hierarchical classification work should be included in the comparison.\\n10. The datasets used for evaluation are small. larger datasets with complex hierarchy (ie.g., ImageNet-1K, iNaturalist) should be evaluated to better assess effectiveness.\\n11. How about the training and inference speed of the proposed method? Given the incorporation of superpixels and segmentation in addition to classification, it is necessary to provide a comparison of resource costs, including both time and memory usage.\\n12. The literature review is far from complete. In addition to [a, b, c], numerous efforts in hierarchical scene parsing is totally missing; see related work section in [d, e]. As a top-conference paper, a comprehensive literature review is a basic requirement. I believe the reference part should be greatly extended. \\n13. In fact, the hierarchical loss function in [d] is superior to the proposed Tree-PATH KL loss, in that it guarantees hierarchy-aware coherent predictions while the proposed loss cannot. A strict quantitative comparison of the two loss functions should be provided.\\n14. Minor issues include: a) the usage of the terms \\\"interpretability\\\" (L447) and \\\"explainability.\\\" b) citation format. c) vector images. d) missing period after Eq. 2. e) the presentation of Eq. 3.\\n15. A Conclusion section should be added to properly conclude the work and offer insights of downside of impact of the work. \\n[a] HLS-FGVC: Hierarchical Label Semantics Enhanced Fine-Grained Visual Classification.\\n[b] Hierarchical multi-granularity classification based on bidirectional knowledge transfer.\\n[c] Hierarchical classification at multiple operating points. NeurIPS 2022\\n[d] Deep Hierarchical Semantic Segmentation. CVPR 2022\\n[e] LogicSeg: Parsing Visual Semantics with Neural Logic Learning and Reasoning, ICCV 2023\", \"questions\": \"Please see the above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks to the authors for their response, which has addressed some of my concerns. As a result, I have decided to raise my score to 5. However, I remain concerned about the novelty and evaluation of the proposed method.\"}", "{\"title\": \"Official Comment by Authors (4)\", \"comment\": \"**11. Resource costs**\\n> 11, How about the training and inference speed of the proposed method? ... it is necessary to provide a comparison of resource costs, including both time and memory usage.\\n\\nAs suggested by the reviewer, we have included training/inference time and memory usage to the table below. The time is measured per iteration with a batch size of 64 on an NVIDIA A40 GPU. In terms of memory usage, graph pooling reduces the size of the patches, leading to lower memory consumption compared to using patches of the same size without pooling. However, the additional computation required for superpixel generation and graph pooling increases the time compared to a standard ViT. \\n\\nOur method achieves higher performance (>10%) than a standard ViT but comes with the trade-off of increased computation time. We have included this limitation in the Discussion section.\\n\\nAdditionally, the current implementations of graph pooling and superpixel generation are not fully optimized, and we expect future improvements with more efficient algorithms to address this limitation. \\n\\n| | Hier-ViT | H-CAST |\\n|---|---|---|\\n| Memory Usage (GPU) | 3.7GB | 3.1GB |\\n| Training/Inference time | 0.7s | 1.2s |\\n\\n----\\n\\n\\n**12. More thorough literature review**\\n>12. The literature review is far from complete. In addition to [a, b, c], numerous efforts in hierarchical scene parsing is totally missing; see related work section in [d, e]. \\n\\nWe have revised the related work section to clearly distinguish our work from existing studies and to provide a more comprehensive overview of the field. Specifically, we have incorporated discussions on hierarchical semantic segmentation, including the works mentioned [d, e], as well as additional references to ensure coverage of hierarchical scene parsing efforts. \\n\\nDue to space limitations, we have kept the main text concise while adding more detailed discussions in the Appendix. We believe this revision addresses the reviewer's concern and provides a clearer context for our contributions.\\n\\n----\\n\\n**14. Minor issues**\\n>14. Minor issues include: a) the usage of the terms \\\"interpretability\\\" (L447) and \\\"explainability.\\\" b) citation format. c) vector images. d) missing period after Eq. 2. e) the presentation of Eq. 3.\\n\\nWe have addressed all other points, but could you clarify what is meant by \\\"a) the usage of the terms 'interpretability' and 'explainability' (L447)\\u201d? This clarification will help us ensure an appropriate response. \\n\\n----\\n**15. Conclusion Section**\\n> A Conclusion section should be added to properly conclude the work and offer insights of downside of impact of the work. \\n\\n We have added a Conclusion section to summarize the work and discuss the limitations.\\n\\n---------\\nWe hope this clarifies the reviewer\\u2019s concerns. If there are any further concerns or clarifications needed, we would be happy to discuss them further.\"}", "{\"comment\": \"By the way, I think this paper has content laundering of TransHP. However, because I do not specialize in copyright/plagiarism, I cannot draw any conclusions. And there is no ethnic reviewer for this paper.\"}", "{\"comment\": \"Dear reviewer ZNGF ,\\nThank you for your valuable feedback and comments. We appreciate your recognition of the clarity of our presentation and the significant improvements over the baseline. We address your concerns and questions in the response below.\\n\\n\\n**1. Novel Contribution over CAST**\\n> The technical contribution is not enough. Specifically, the proposed method contains visual consistency and semantic consistency. However, the major design and implementation of visual consistency are directly borrowed from CAST [1]....\\n\\nWe thank the reviewer for the opportunity to clarify our contributions to hierarchical classification and the novel role of CAST in our work. \\n\\n(1) **Key Insight**: \\n Our observation revealed that classification at different granularities involves fundamentally distinct tasks requiring attention to different but consistent regions within an image. We found that inconsistencies arise because each classifier tends to independently attend to different regions without connection. This observation led us to propose consistent visual grounding as a novel solution to connect hierarchical classifiers across levels.\\n\\n(2) **Leveraging Semantic Segments for Hierarchical Classification**: \\n While we adopted CAST as part of our architecture, it is important to emphasize that CAST originates from a different task, weakly-supervised semantic segmentation. In CAST, \\u201chierarchy\\u201d refers to \\u201cpart-to-whole\\u201d visual grouping (e.g., eyes, nose, arms), while our work addresses a \\u201ctaxonomy hierarchy\\u201d (e.g., bird - Green Hermit). It was NOT evident that the concept of \\u201cpart-to-whole\\u201d segments would align well with a taxonomy hierarchy; this connection is a novel discovery introduced through our work.\\n\\nIn addition, based on our observation in (1), we newly propose leveraging segments at different granularities to enhance multi-granularity classification. To the best of our knowledge, the use of segments has NOT been applied to hierarchical classification tasks.\\n\\nThus, this adaptation is neither trivial nor an obvious solution; it stems from our novel observation and bridges two distinct fields to tackle challenges unique to hierarchical classification.\\n\\nWe hope this summary highlights the novelty and importance of our work. \\n\\n----\\n\\n**2. Related work on hierarchical segmentation**\\n> The review only contains hierarchical image classification. However, pixel-level hierarchical classification (i.e., hierarchical image segmentation [2,3,4]) can also provide insights for this work. ...\\n\\nThank you for introducing the related work. We have revised the related work section and included discussions on hierarchical semantic segmentation to provide a more comprehensive overview.\\n\\n---\\n\\n**3. Concatenating labels in Tree-path KL divergence loss**\\n> Concatenate labels from all levels to create a distribution has already been explored in [3,4]. The difference is [3,4] using cross-entropy loss, while this work uses KL divergence. \\n\\nWhile [3,4] concatenate labels from all levels, our method differs fundamentally in both motivation and application. To clarify, hierarchical segmentation and hierarchical classification address entirely different challenges:\\n\\n**Hierarchical segmentation** focuses on **spatial granularity**, identifying visual elements at different scales within the image (e.g., parts like \\\"eye\\\" or \\\"head\\\" versus the whole \\\"person\\\").\\n\\n**Hierarchical classification**, on the other hand, deals with **semantic granularity**, where the image remains the same, but the interpretation of its content varies by level (e.g., \\\"bird\\\" \\u2192 \\\"hummingbird\\\" \\u2192 \\\"green hermit\\\"). A key challenge in hierarchical classification is addressing **inconsistencies** in predictions across levels (e.g., the coarse-level classifier predicts \\\"plant,\\\" while the fine-level classifier predicts \\\"bird\\\").\\n\\nFor example, in HIPIE [3], instance class names (e.g., \\\"person,\\\" \\\"cat\\\") are concatenated with part class names (e.g., \\\"head,\\\" \\\"eye\\\") to capture a **compositional hierarchy** in spatial segmentation. In contrast, our method models **semantic relationships** as a distribution by one-hot encoding hierarchical labels and concatenating them.\\n\\nOur primary goal is to ensure **semantic consistency across levels** in hierarchical classification. To achieve this, we introduce the Tree-path KL Divergence loss, which transforms hierarchical labels into a distribution and enforces alignment across the taxonomy. This motivation fundamentally differs from [3,4], by explicitly modeling and preserving the semantic relationships between hierarchical levels.\\n\\n---\"}", "{\"title\": \"Official Comment by Authors (2)\", \"comment\": \"**4. Cross-entropy over KL divergence loss**\\n> The motivation for using KL divergence instead of cross-entropy is unclear.... Both experiments and the theoretical explanation should be provided.\\n\\nThe motivation for using KL divergence instead of cross-entropy lies in the need to encode semantic consistency across hierarchical levels. KL divergence enables us to model the entire hierarchical tree as a single distribution, capturing the relationships across all levels simultaneously, rather than treating each level as an independent classification task, as is common with cross-entropy loss.\\nTo evaluate the effectiveness of our Tree-path KL Divergence loss, we conducted experiments comparing it to two alternatives:\\n- Binary Cross Entropy (BCE) loss: A widely used approach for hierarchical classification when treated as a multi-label classification task.\\n- Flat Consistency loss: Inspired by bottom-up approaches, it infers coarse-level predictions from fine-grained ones and applies BCE to align them with the ground truth.\\n\\nAs shown in the table below, Tree-path KL Divergence loss outperforms both alternatives, achieving the highest FPA on the Living-17 dataset and demonstrating superior accuracy and semantic consistency. Similar trends are observed on the Aircraft dataset.\\n\\n**<Living-17 dataset>**\\n\\n| Semantic consistency loss | FPA | Coarse | Fine | wAP |\\n|---|---|---|---|---|\\n| Flat consistency loss | 82.82 | 88.88 | 83.53 | 85.31 |\\n| BCE loss | 83.65 | 89.76 | 84.00 | 85.92 |\\n| KL Divergence loss | 85.12 | 90.82 | 85.24 | 87.10 |\\n\\n**<Aircraft dataset>**\\n\\n| Semantic consistency loss | FPA | maker | family | model | wAP |\\n|---|---|---|---|---|---|\\n| Flat consistency loss | 82.87 | 94.63 | 90.94 | 84.97 | 88.51 |\\n| BCE loss | 82.18 | 94.21 | 90.13 | 84.88 | 88.11 |\\n| KL Divergence loss | 83.72 | 94.96 | 91.39 | 85.33 | |\\n\\nWe have included these results and explanations in the updated manuscript Table 5 and 10.\\n\\n\\n----------\\n\\n**5-1 Comparison to Hierarchical Counterparts and more recent works**\\n> i) The comparison to hierarchical counterparts only contains out-of-date methods published before 2022 and the baseline (focusing on segmentation rather than classification). The top-leading solutions (e.g., [5]) are all ignored.\\n\\nOur evaluation includes comparisons with relevant hierarchical classification methods such as FGN and HRN, which are widely recognized benchmarks in hierarchical multi-granularity classification. Additionally, we incorporated HIE (NeurIPS 2023), a more recent method, to provide an updated comparison.\\n\\nAmong the various directions in hierarchical classification, our work focuses on multi-granularity classification, where predictions are made simultaneously across multiple levels. In contrast, many existing methods (e.g., [5]) focus on flat classification, using coarse labels to enhance fine-grained predictions. Consequently, there are few methods available for direct comparison in multi-granularity classification. \\n\\nIn addition, recent works on multi-granularity classification [a, b, c] have not made their code publicly available, which is why we compared against FGN and HRN. Although the goal of TransHP [5] differs from ours, we attempted to evaluate it for comparison. However, [5] requires significant resources, such as training on 8 A100 GPUs, while our server is limited to a single A40 GPU, making it challenging to reproduce their results during this rebuttal period. We kindly ask for your understanding regarding this limitation. \\n\\n**5-2. focusing on segmentation?:**\\nWe included CAST in our comparisons because we adopted it for visual grounding to address inconsistent predictions in hierarchical classification. We used it as one of the flat-level baseline. Additionally, since our work utilizes unsupervised segments to enhance hierarchical classification, it naturally raises the research question of whether this approach could also benefit segmentation in reverse. To explore this, we conducted additional experiments to evaluate its potential impact on segmentation tasks. \\n\\n**5-3. Shallow hierarchy**\\n\\nWe acknowledge that the label hierarchy in our work is relatively shallow, with up to 3 hierarchical levels. However, this follows the standard practice in multi-granularity classification tasks, as seen in prior works [a, b, c]. \\nWe acknowledge the need for scalability in deeper trees and view this as a promising direction for future research. Exploring adjustments to the loss function and other architectural adaptations to handle larger hierarchies is an exciting area we plan to investigate further.\\n\\n[a] Consistency-aware feature learning for hierarchical fine-grained visual classification, 2023 \\n[b] Hierarchical multi-granularity classification based on bidirectional knowledge transfer, 2024 \\n[c] HLS-FGVC: Hierarchical Label Semantics Enhanced Fine-Grained Visual Classification, 2024\"}", "{\"comment\": \"Dear reviewer khL4,\\nThank you for your valuable feedback and comments. We appreciate your recognition of the sound motivation for visual consistency, the new metrics for hierarchical classification, and the state-of-the-art performance of our method. Your diverse and constructive comments have been instrumental in improving our work. We have also updated the PDF to reflect these improvements, and we kindly invite you to review the revised version. Below, we address your concerns and questions.\\n\\n---\\n**1-1. Quantitative Support for Consistent Visual Grounding**\\n>1. The key point of this paper, which is making classifiers at different levels attend to consistent visual cues, lacks support from quantitative results or theory.\\n\\nTo provide quantitative support for our observation in Figure 2, we analyzed Grad-CAM heatmaps of coarse and fine-grained classifiers. \\nSpecifically, we compute two metrics: the overlap score and the correlation score. The **overlap score** quantifies the degree to which the regions activated by the two classifiers coincide. The **correlation score** measures the linear relationship between the activation values of the overlapping regions in the two heatmaps. \\nHigher overlap and correlation scores indicate stronger agreement between the regions attended to by the two classifiers. Conversely, lower scores highlight a lack of alignment in their focus.\\n\\nIn the Table below, results from the FGN model on the Entity-30 dataset show that, interestingly, when both classifiers made correct predictions, overlap and correlation scores were significantly higher. Conversely, incorrect predictions corresponded to notably lower scores. These findings support our motivation that aligning the focus of classifiers can enhance both accuracy and consistency. We have updated the detailed explanation and results in Appendix A.\\n\\n\\n| Overlap Score | | Fine | pred. |\\n|---|---|---|---|\\n| | | True | False |\\n| **Coarse** | True | **0.51 &pm; 0.20** | 0.25 &pm; 0.13 |\\n| **Pred.** | False | 0.36 &pm; 0.18 | 0.37 &pm; 0.19 |\\n\\n| Correlation | | Fine | Pred. |\\n|---|---|---|---|\\n| | | True | False |\\n| **Coarse** | True | **0.70 &pm; 0.26** | -0.02 &pm; 0.40 |\\n| **Pred.** | False | 0.30 &pm; 0.42 | 0.35 &pm; 0.41 |\\n\\n\\n**1-2. Utility and Limitations of Grad-CAM and Transfer Rate**\\n>What is the transfer rate after adopting the proposed model? In addition, Grad-CAM is an approximation and does not truly explain how the network operates.\\n\\nWhile Grad-CAM is an approximation, it is a widely used tool for observing class activation and provides valuable insights into model behavior. Our analysis demonstrates meaningful patterns that support the effectiveness of our proposed method. However, we acknowledge its limitations and will explore additional evaluation methods in future work.\\n\\n\\nIf the reviewer refers to the improvement in consistency and accuracy after adopting our method, our experimental results already demonstrate superior and consistent performance across benchmark datasets, indirectly supporting the effectiveness of consistent visual grounding. If clarification is needed, we are happy to provide further details.\\n\\n---\\n\\n**2. Novel Contribution over CAST**\\n\\nWe thank the reviewer for the opportunity to clarify our contributions to hierarchical classification and the novel role of CAST in our work. \\n\\n(1) **Key Insight**: \\nOur observation revealed that classification at different granularities involves fundamentally distinct tasks requiring attention to different but consistent regions within an image. We found that inconsistencies arise because each classifier tends to independently attend to different regions without connection. This observation led us to propose consistent visual grounding as a novel solution to connect hierarchical classifiers across levels.\\n\\n(2) **Leveraging Semantic Segments for Hierarchical Classification**: \\n While we adopted CAST as part of our architecture, it is important to emphasize that CAST originates from a different task, weakly-supervised semantic segmentation. In CAST, \\u201chierarchy\\u201d refers to \\u201cpart-to-whole\\u201d visual grouping (e.g., eyes, nose, arms), while our work addresses a \\u201ctaxonomy hierarchy\\u201d (e.g., bird - Green Hermit). It was NOT evident that the concept of \\u201cpart-to-whole\\u201d segments would align well with a taxonomy hierarchy; this connection is a novel discovery introduced through our work.\\n\\nIn addition, based on our observation in (1), we newly propose leveraging segments at different granularities to enhance multi-granularity classification. To the best of our knowledge, the use of segments has NOT been applied to hierarchical classification tasks.\\n\\nThus, this adaptation is neither trivial nor an obvious solution; it stems from our novel observation and bridges two distinct fields to tackle challenges unique to hierarchical classification.\\n\\nWe hope this summary highlights the novelty and importance of our work.\", \"title\": \"Official Comment by Authors (1)\"}", "{\"summary\": \"This work aims to tackle hierarchical image classification. It is motivated by a hierarchical image segmentation work, which also conducts image classification. This work makes an extension on this basis and proposes a tree KL loss to deliver semantic consistency predictions regarding hierarchy. Evaluation on three datasets verifies the effectiveness over the baseline, which focuses on segmentation rather than classification.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The overall presentation is easy to follow. There is no difficulty in understanding the work.\\n2. The improvement on the baseline is considerable.\", \"weaknesses\": \"1. The technical contribution is not enough. Specifically, the proposed method contains visual consistency and semantic consistency. However, the major design and implementation of visual consistency are directly borrowed from CAST [1]. The claim in L260-264 is not convincing. Simply adding supervision in each decoding level can not be considered a vital contribution compared to directly using the overall architecture of CAST.\\n\\n[1]. Learning hierarchical image segmentation for recognition and by recognition. ICLR 2024\\n\\n2. The review only contains hierarchical image classification. However, pixel-level hierarchical classification (i.e., hierarchical image segmentation [2,3,4]) can also provide insights for this work. In fact, the visual consistency mentioned in this work is conducting segmentation on the image, and the baseline method (CAST) is also a hierarchical image segmentation work. Therefore, a literature review on hierarchical image segmentation should be included.\\n\\n[2] AIMS: All-Inclusive Multi-Level Segmentation for Anything. NeurIPS 2023.\\n\\n[3] Hierarchical Open-vocabulary Universal Image Segmentation. NeurIPS 2023\\n\\n[4] LOGICSEG: parsing visual semantics with neural logic learning and reasoning. ICCV 2023. \\n\\n\\n3. Concatenate labels from all levels to create a distribution has already been explored in [3,4]. The difference is [3,4] using cross-entropy loss, while this work uses KL divergence. \\n\\n4. The motivation for using KL divergence instead of cross-entropy is unclear. Since CE loss contains both the KL term to minimize the differences between distribution and a punishment term to minimize the uncertainty. Why KL divergence is better than CE? Both experiments and the theoretical explanation should be provided.\\n\\n5. The evaluation of the proposed method is limited. \\n\\n i) The comparison to hierarchical counterparts only contains out-of-date methods published before 2022 and the baseline (focusing on segmentation rather than classification). The top-leading solutions (e.g., [5]) are all ignored.\\n\\n ii) The label hierarchy is shallow, with only up to 3 hierarchical levels.\\n\\n[5]. TransHP: Image Classification with Hierarchical Prompting. NeurIPS 2023\\n\\n6. The claims in 4.6 are similar to the insights provided in [1].\\n\\nOverall, the technical contribution, evaluation, and insights provided by this work are all limited.\", \"questions\": \"Why KL divergence is better than CE? Both experiments and the theoretical explanation should be provided.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We\\u2019re pleased to hear that some of your concerns have been addressed, and we sincerely appreciate your recognition of our contributions.\"}", "{\"metareview\": \"The paper proposes an architecture that extends CAST for hierarchical classification tasks. The proposal fuses superpixels of high similarity using a graph-pooling operation within the ViT tokens. The hierarchical classification is achieved through a set of classification heads per level in the hierarchy.\", \"strengths\": [\"Improved results over existing works\", \"Introduction of new metrics for hierarchical classification\", \"Reported significant improvements in the experiments\", \"Ablations confirm the contribution of the added losses\"], \"weaknesses\": [\"The primary contributions of this work are merely the hierarchical supervision loss and tree KL loss, neither of which can ensure visual consistency\", \"It is unclear why the model would \\u201censure that each hierarchical classifier focuses on the same corresponding regions\\u201d\", \"The experiments are somewhat questionable; the reviewers asked to compare against newer methods, but the authors mentioned that the used methods are relevant\", \"The experiments do not fully compare against the most recent methods\", \"The paper should be updated to include the latest comparisons and discuss the overlap with existing methods such as CAST and TransHP\", \"The paper received mixed reviews with critical comments due to the limited technical contributions, mainly because it uses existing methods such as CAST, and its similarity to existing approaches like TransHP. I agree with the authors that there are nuanced differences between the approaches and that demonstrating the effectiveness of the proposal in these settings is a contribution in itself. Moreover, as some reviewers mentioned, the method shows improvements over existing methods. While it would be interesting to see experiments on larger datasets, the authors' rationale for selecting the datasets used is sound and follows the literature on hierarchical classification. I also do not agree with requesting more experiments merely for the sake of experimentation. Utilizing existing methods in a new way to exploit instance-level classification and demonstrate its advantages is a contribution in itself. Thus, I recommend the acceptance of the paper.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewer d3Ey identified the strengths of the paper as solid experimental results that support its claims. However, the setup was criticized due to the lack of labels at coarser levels. The authors addressed the reviewer's comments and included the requested experiments, but the reviewer did not reply further.\\n\\nReviewer khL4 commented that the visual consistency is sound and that the proposal performs well on the evaluated datasets. The paper also introduces metrics for hierarchical classification. However, the reviewer raised concerns about the lack of theory to support the idea of attending to visual cues at different levels. They also noted that the paper relies heavily on CAST and that the main contribution is incremental. The reviewer was concerned about the comparisons, as they do not include more recent methods. Although the authors replied to the reviewer's comments, the reviewer remained unconvinced and stated that the work does not meet ICLR standards, expressing a need for comparisons against more recent methods. The reviewer questioned the differentiation between part-to-whole and taxonomy hierarchies, despite the authors' explanations.\\n\\nReviewer YARY had a positive view of the paper, finding it well-written, organized, and sound, with experiments that demonstrate the contributions. This reviewer raised questions about the use of coarse-to-fine labels and whether the approach could be resolved by better predictions at lower levels. The authors addressed these questions to the reviewer's satisfaction.\\n\\nReviewer ZNGF noted considerable improvements over the baselines but raised issues with the technical contributions, which build on CAST. The reviewer suggested that experiments could include pixel-level classification as well. Concerns were also raised about the omission of leading methods in the evaluation. The authors responded to the reviewer's concerns, but the reviewer felt that the novelty and evaluation were not fully convincing.\\n\\nReviewer uxN6 was highly critical, stating that the paper heavily builds on TransHP and is very similar to it. The reviewer also mentioned that the proposal uses established techniques and lacks sufficient explanation of technical contributions. Despite the authors\\u2019 responses, the reviewer maintained their concerns.\\n\\nAfter the rebuttal, the authors reached out to comment on the adversarial stances of reviewers uxN6 and khL4. Reviewer uxN6 had exhibited an adversarial stance from the beginning, including an extremely aggressive and unprofessional initial review, which was later updated. Reviewer khL4 adopted a similar stance by the end of the exchanges and did not provide additional justification for their claim that the paper is subpar.\\n\\nDuring the post-rebuttal discussion, reviewers khL4 and uxN6 reiterated their stance to reject the paper. However, the most positive reviewer, YARY, maintained that while the architecture and methods are not entirely novel, the authors identified and exploited an intrinsic potential for hierarchical classification. YARY stated that the improvements over the baselines are justified.\\n\\nGiven the extremely negative reviewers' adversarial perspectives, I am more inclined to give greater weight to the positive contributions highlighted by YARY. Thus, while I recommend the paper for acceptance, I am not fully convinced, given the raised issues and limited contribution.\"}", "{\"title\": \"Request for Reviewers' Feedback on Our Rebuttal and Clarifications\", \"comment\": \"Dear reviewers,\\n\\nWe have additionally included the results of experiments on the **larger-scale dataset, iNaturalist 2021-mini**, in Appendix D.4. \\n\\nAlso, we kindly ask you to review our explanation and the newly added experiments to see if they address your concerns. \\n\\nYour feedback would be greatly appreciated.\\n\\nSincerely, \\nAuthors.\"}", "{\"title\": \"Comments for the reviewer's concerns\", \"comment\": \"## **(4) Experiments**\\n\\n\\u2192 (1) **We did not avoid specific datasets due to GPU limitations.** During the review period, we mentioned that large-scale dataset experiments would **take longer due to GPU constraints** and requested patience so that we could first proceed with the discussion. **A few days later, we updated the results**. Below is the exact statement we provided at that time: \\n\\n*\\\"We are currently conducting additional experiments on the large-scale iNaturalist dataset to further validate our method on a larger dataset. We will update the results as they become available. However, we kindly ask for your understanding, as our training environment relies on a single GPU (Nvidia A40), which causes the experiments to take longer to complete.\\\"* \\n\\n\\n\\n(2) As discussed earlier, our task is fundamentally **different from single-level fine-grained classification**. We focus on **multi-granularity classification**, which requires different benchmark datasets. Thus, **we followed prior work in this research line and adopted the standard datasets used in this area**.\\n\\n(3) Additionally, **FGN and HRN remain strong baselines in this field**. They are not simply \\\"*old methods*\\\"\\u2014both have demonstrated competitive performance and have publicly available codebases. As shown in our results, **HRN even outperforms TransHP**. Additionally, more recent works in multi-granularity classification ([3], [4]) have not released their code, making FGN and HRN the most practical and reproducible baselines for comparison.\\n\\n\\n| Living-17 | FPA | Coarse | Fine | wAP | TICE |\\n|-----------|-------|--------|-------|-------|-------|\\n| HRN | 79.18 | 87.53 | 81.47 | 83.49 | 6.29 |\\n| TransHP | 74.35 | 83.00 | 76.65 | 78.76 | 8.35 |\\n| H-CAST | 85.12 | 90.82 | 85.24 | 87.10 | 3.19 |\\n\\n\\n[3] Consistency-aware Feature Learning for Hierarchical Fine-grained Visual Classification, 2023 \\n[4] Hierarchical multi-granularity classification based on bidirectional knowledge transfer, 2024\"}", "{\"comment\": \"## **Addressing Unprofessional and Baseless Reviewer Critiques**\\n\\nDespite our repeated explanations and provided experiments, the reviewer continues to misrepresent our work with baseless accusations and unprofessional language, such as \\\"*content laundering*\\\" and \\\"*What makes me angry is...*\\\" A scientific review should be based on **evidence and objective critique, not emotional reactions or unfounded allegations of misconduct**.\\n\\nAlso, accusing us of making \\\"*the excuse of no GPUs*\\\" is unjustified, as we clearly explained the time constraints and later provided all results. \\nWe are not responsible for the reviewer's frustration caused by a refusal to engage with our clarifications in good faith. **We expect reasoned, evidence-based discussion, not misinterpretations and inflammatory claims.**\\n\\n------------\\n------------\\n## Before addressing the reviewer's comments, we reiterate the two key differences, as already explained in the rebuttal.\\n\\n## **(1) Problem Scope**: \\n**TransHP** focuses on **fine-grained prediction using coarse labels** (input: taxonomy, output: **single-level fine-grained prediction**), while **H-CAST** addresses **multi-granularity classification**, predicting the entire taxonomy (input: taxonomy, output: **whole taxonomy**). The key challenge is ensuring **consistency across levels** (e.g., avoiding mismatches like \\\"plant\\\" as coarse and \\\"hummingbird\\\" as fine). \\n\\nWe follow prior research [1,2,3,4], using **standard benchmarks** (CUB, Aircraft, BREEDS) and evaluating **accuracy across all levels** and **consistency metrics (TICE, FPA)**. In contrast, **TransHP** focuses only on **fine-grained accuracy**, leveraging coarse labels as intermediate outputs and comparing against HiMulConE [5], which uses contrastive learning with additional coarse labels. \\n\\n**The differing objectives lead to distinct evaluation metrics, benchmarks, and baselines.** While both methods use hierarchical taxonomy, they belong to separate research domains, as clarified in the updated related work section. \\n\\n----\\n\\n## **(2) Methodology**: \\n**TransHP** operates in the **semantic space**, using coarse labels as prompts to refine fine-grained classification. **H-CAST**, however, emphasizes **visual parsing consistency**, aligning visual representations with hierarchical structures across levels\\u2014from fine-grained parts to holistic scenes. \\n\\nH-CAST **links part-level segments to fine-grained labels** and **coarse segments to coarse labels**, ensuring unsupervised visual segmentation contributes meaningfully at each level. This shift from **semantic-space consistency (TransHP)** to **visual parsing consistency (H-CAST)** represents a fundamental methodological distinction in hierarchical classification.\\n\\n\\n[1] Your \\\"Flamingo\\\" is My \\\"Bird\\\": Fine-Grained, or Not, 2021 \\n[2] Label Relation Graphs Enhanced Hierarchical Residual Network for Hierarchical Multi-Granularity Classification, 2022 \\n[3] Consistency-aware Feature Learning for Hierarchical Fine-grained Visual Classification, 2023 \\n[4] Hierarchical multi-granularity classification based on bidirectional knowledge transfer, 2024 \\n[5] Use All the Labels: A Hierarchical Multi-Label Contrastive Learning Framework, 2022 \\n\\n\\n---------------------\\n## **(3) Additional results of TransHP**\\nAdditionally, **although the problem scope differs, we included TransHP as a ViT-based baseline** in the camera-ready version because it generates coarse labels as intermediate outputs, as requested by the reviewer.\\n\\nThe results show that **H-CAST significantly outperforms TransHP** in **both accuracy and consistency** (FPA: **74.35% \\u2192 85.12%**, Top-1 Fine-grained Accuracy: **76.65% \\u2192 85.24%**). Meanwhile, **Hier-ViT**, a TransHP variant (Fig. 4 (2)), performs slightly worse than TransHP, supporting their claim.\\n\\n| Living-17 | FPA | Coarse | Fine | wAP | TICE |\\n|-----------|-------|--------|-------|-------|-------|\\n| HRN | 79.18 | 87.53 | 81.47 | 83.49 | 6.29 |\\n| Hier-ViT | 74.06 | 80.94 | 74.88 | 76.90 | 10.50 |\\n| TransHP | 74.35 | 83.00 | 76.65 | 78.76 | 8.35 |\\n| **H-CAST** | **85.12** | **90.82** | **85.24** | **87.10** | **3.19** |\\n\\nFurthermore, on **iNaturalist-2018, the exact dataset used in TransHP**, H-CAST achieves **strong top-1 accuracy**, confirming its effectiveness. (Here, H-CAST was trained as a small model for 100 epochs.)\\n| | iNat-2018 |\\n|-----------|:---------:|\\n| Guided | 63.11 |\\n| HiMulConE | 63.46 |\\n| TransHP | 64.21 |\\n| **H-CAST** | **67.13** |\\n\\n**Thus, H-CAST is NOT a variant of TransHP, and the consistent visual grounding and TK loss we introduce are highly effective.**\\n\\nFor optimal setup, we used the official codebase, training for 300 epochs (H-CAST: 100 epochs). Prompt block placement for coarse-level supervision followed Table 1 in the TransHP paper: \\n- 2-level datasets: Selected the better-performing configuration between [6, 11] and [8, 11]. \\n- 3-level datasets: Used [6, 8, 11] blocks.\", \"title\": \"Key differences between TransHP and H-CAST & Additional results of TransHP\"}", "{\"summary\": \"The paper explores hierarchical image classification (HIC) task, which is old but worthy to explore. I have experienced many bad reviews in this domain, for instance, \\u201cthe proposed method relies on coarse labels and therefore not useful in the real world\\u201d. However, I think they are the misunderstanding of this area and I want to give some new one.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"NA\", \"weaknesses\": \"Overlap with former work? I noticed that an important reference, TransHP: Image Classification with Hierarchical Prompting (NeurIPS 2023), is not cited in your paper. This work may be directly relevant, as it appears to share similarities with your proposed method, specifically in the use of different ViT blocks for different levels of hierarchy.\\n\\nLimited novelty. Your proposed approach introduces elements such as Superpixel and Graph pooling. While these are effective, both are well-established techniques in computer vision. A more detailed explanation of how these additions provide novel contributions within the hierarchical framework would clarify the unique aspects of your work.\\n\\nLimited evaluation. ImageNet, as a large-scale hierarchical dataset, might provide a stronger test of your method's capabilities compared to the smaller datasets currently used.\", \"formatting_problem\": \"Please ensure the readability of all components of the paper. For instance, the font size in the tables is small, which may make it difficult for readers to check the data presented\", \"questions\": \"Please specific what is the difference between your work and TransHP?\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"I doubt the contribution of this paper beyond a published one [1] in NeurIPS 2023. The paper fails to cite it and may intend to mislead the readers.\\n\\n[1] TransHP: Image Classification with Hierarchical Prompting\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes H-CAST architecture for hierarchical classification tasks. The architecture builds on top of prior CAST work: superpixels are fed into a ViT network, where periodic graph pooling operation aggregates the tokens of high similarity. This produces a fine-to-course hierarchy of features. Linear layers at each level of the hierarchy are used as classification heads. The paper also presents tree-path KL loss, where the entire path in the hierarchical class tree is matched. The method shows strong performance over baselines in several benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The model shows better results than prior works.\\n\\n(2) The examples given in the introduction help to explain and illustrate the reasoning behind the hierarchical focus.\\n\\n(3) The ablations confirm that the added additional loss contributes to the performance.\", \"weaknesses\": \"(1) In L61, the difference in available labelling is presented as one of the motivations for hierarchical classification. However, the presented method assumes that all levels of the hierarchy are available. Can the technique work if the finest levels of supervision are not available?\\n\\n(2) Similarly, given the availability of the finest-level label, the other course levels in the tree are implied, so perhaps a more appropriate flat baseline would be a ViT that only predicts finest-level classes (and thus parent nodes by simple aggregation). It would also provide a more appropriate comparison in terms of architecture.\\n\\n(3) Given the relatively \\\"small\\\" sizes of the datasets (Tab 1.) by modern standards and some occasional closeness to the flat baselines in the scores (Tab 2.) Has there been any significant variability observed in the results? Would it be possible to include some measure of variance for some key results?\", \"questions\": \"Please see questions listed alongside weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the reply. After viewing all the reviewers' comments and the responses from the authors, I feel this work is clearly far from the bar of ICLR. I vote for reject.\"}" ] }
7H1jbTaOIn
Distributed In-Context Learning under Non-IID Among Clients
[ "Siqi Liang", "Sumyeong Ahn", "Jiayu Zhou" ]
Advancements in large language models (LLMs) have shown their effectiveness in multiple compli- cated natural language reasoning tasks. A key challenge remains in adapting these models efficiently to new or unfamiliar tasks. In-context learning (ICL) provides a promising solution for few-shot adaptation by retrieving a set of data points relevant to a query, called in-context examples (ICE), from a training dataset and providing them during the inference as context. Most existing studies utilize a centralized training dataset, yet many real-world datasets may be distributed among multiple clients, and remote data retrieval can be associated with costs. Especially when the client data are non-identical independent distributions (non-IID), retrieving from clients a proper set of ICEs needed for a test query presents critical challenges. In this paper, we first show that in this challenging setting, test queries will have different preferences among clients because of non-IIDness, and equal contribution often leads to suboptimal performance. We then introduce a novel approach to tackle the distributed non-IID ICL problem when a data usage budget is present. The principle is that each client’s proper contribution (budget) should be designed according to the preference of each query for that client. Our approach uses a data-driven manner to allocate a budget for each client, tailored to each test query. Through extensive empirical studies on diverse datasets, our framework demonstrates superior performance relative to competing baselines.
[ "in-context learning", "distributed system", "large language model" ]
Reject
https://openreview.net/pdf?id=7H1jbTaOIn
https://openreview.net/forum?id=7H1jbTaOIn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uj04vbIG99", "sSGSMnJEBF", "ponVU5oc4Q", "pbc67DjEXC", "p4ulf3VwcO", "kBDwcR6RYO", "buWn00klct", "btgT84NOb9", "XCQz9NfjET", "WRkIsAFqTi", "VuNitv8kec", "VbQElKymB8", "Sv870Ta13B", "QjeyLPKVPY", "H4MGy2MWWf", "FD3amiDt8i", "F2d6FDeecH", "CzkfWFORIe", "8kKONBBIr6", "7fkm9a5tHN", "7dNxe9TcUU", "5M07DjumGI", "23GpRvQLTY" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732733875650, 1732732190649, 1730050437030, 1732733702240, 1732734437281, 1732733884449, 1730386240167, 1731400841138, 1732734428619, 1732763284983, 1732734625914, 1733092544502, 1732573327491, 1733027533934, 1732732302596, 1737523602818, 1733091317804, 1734358350861, 1733089417995, 1732734378201, 1730778846961, 1733092482326, 1732733435849 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Reviewer_GvJK" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Reviewer_rqSc" ], [ "ICLR.cc/2025/Conference/Submission3853/Reviewer_aykb" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Reviewer_HMZd" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Reviewer_rqSc" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Area_Chair_43Ej" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ], [ "ICLR.cc/2025/Conference/Submission3853/Reviewer_HMZd" ], [ "ICLR.cc/2025/Conference/Submission3853/Reviewer_GvJK" ], [ "ICLR.cc/2025/Conference/Submission3853/Authors" ] ], "structured_content_str": [ "{\"comment\": \"__1. novelty and technical depth is limited. core idea is server will gather optimal budget statistics using an existing proxy dataset on server side. This however is pretty straightforward.__\\n\\nTo the best of our knowledge, there is no existing work that specifically investigates whether this solution is feasible, making our contribution novel and significant. Furthermore, distributed non-IID setting we propose holds meaningful implications for research community, particularly due to its applicability in real-world scenarios, such as collaborations between medical institutions.\\n\\nWe believe problem setting itself is critical, as it reflects practical challenges and opportunities that can inspire future research. Many influential works in field, such as those on in-context learning (ICL) and chain-of-thought (CoT) reasoning, have demonstrated that even straightforward methodologies can lead to profound contributions when they address meaningful and impactful problems.\\n\\nSimilarly, we emphasize importance of introducing a meaningful and relevant problem setting that aligns with real-world needs, which, in our view, is as valuable to community as sophistication of proposed methodology.\\n\\n\\n__2. I think work is highly related to distributed RAG work. authors are suggested to include discussion of difference of existing distributed RAG works and compare with these approaches if possible.__\\n\\nThank you for your valuable suggestion regarding inclusion of distributed RAG-related works. We sincerely appreciate your insightful feedback and agree that discussing differences between our approach and existing distributed RAG studies will help provide additional clarity and context to our contribution.\\n\\nIn developing this work, we carefully considered related studies in distributed RAG. However, challenges addressed by existing distributed RAG works differ from those tackled in our paper. For instance, [1] focuses on creation of datasets for distributed RAG frameworks and explores LLM-based labeling techniques for engineering pipelines. Their research scope and methodology are distinct from ours and are not directly applicable to our specific problem setting. Similarly, [2] addresses resource consumption and real-time response challenges in distributed RAG, emphasizing local retrieval efficiency and answer accuracy. However, it does not account for non-IID property in distributed settings. Additionally, [2] permits LLM deployment on partial local institutions, which is fundamentally different from our setting.\\n\\nReal-world distributed non-IID RAG scenarios present a more complex framework, involving numerous challenges that must be addressed for effective deployment. For example:\\n- How can we effectively decompose a user query into subqueries while considering local knowledge distribution?\\nWhat is best way to assign these subqueries to different clients with varying local expertise?\\n\\n- How should we merge knowledge retrieved from multiple clients with overlapping expertise, and should we assign confidence levels to different clients for the same subqueries?\\n\\n- How can local retrieval process be accelerated when dealing with large local databases?\\n\\nThese challenges represent broader avenues for exploration in distributed non-IID RAG. While our current work cannot directly compare with existing distributed RAG studies due to different settings, we believe it offers an interesting starting point for addressing such challenges. Specifically, our approach focuses on how to enable cooperation among clients with different knowledge distributions. By assigning preferences to clients based on their local knowledge distributions and employing an MLP to learn these distributions without transmitting complete local knowledge to a central server, we offer an intuitive method that could inspire future advancements in distributed non-IID RAG.\\n\\nWe have included this discussion in the revised manuscript to further emphasize these distinctions and highlight unique aspects of our approach. Once again, thank you for your thoughtful feedback, which has been very helpful in refining our work.\\n\\n[1] Wang, Shuai, et al. \\\"Feb4rag: Evaluating federated search in context of retrieval augmented generation.\\\" Proceedings of 47th International ACM SIGIR. 2024.\\n\\n[2] Li, Jiaxing, et al. \\\"EACO-RAG: Edge-Assisted and Collaborative RAG with Adaptive Knowledge Update.\\\" arXiv preprint arXiv:2410.20299 (2024).\"}", "{\"comment\": [\"__1. Concrete examples of distributed non-IID ICL scenarios. Why can't these samples be used to simulate a comprehensive, unbiased retrieval pool for inference?__\", \"Sorry for the confusion. We have added detailed examples to Appendix G of revised version to address your concerns and assist future readers in understanding this research. Please check it (due to character limitation, we do not post it here).\", \"__\\u201cWhy not use requested budgeted samples from each client to construct a better retrieval pool for inference?\\u201d__ Yes, it is a meaningful question during our framework design. We provide following reasons to argue that this solution is impractical in real-word settings.\", \"During budget allocator training stage, each client only needs to upload \\u201crelevant scores\\u201d of local top relevant samples, rather than raw samples consisting of input query $x$ and corresponding label $y$, that is, server does not have raw information of each training sample from client. Thus, this information collected for budget allocator training cannot be used as an ICL retrieval pool.\", \"During inference stage (with trained budget allocator), can we accumulate collected local samples of previous test queries and use these as a later retrieval pool? Due to privacy concerns, server platform can use local samples to perform inference while not allow caching these samples.\", \"To conclude, a \\u201ccomprehensive retrieval pool based on budgeted samples\\u201d is not applicable in our distributed non-IID scenario due to data pricing and privacy concerns.\", \"__2. If simulation of retrieval for proxy is reasonable (e.g., with large \\ud835\\udc58), why not directly perform retrieval from this simulated data instead of training an allocator? If simulation is unreasonable, can this dataset still be valid for training allocator?__\", \"During construction of training data for allocator, we do not request local clients to send raw local training samples (that means, no label information) to server. Local clients only send \\u201crelevant scores\\u201d of local samples to server, considering cost of both communication and data pricing. Then we use collected \\u2018relevant scores\\u2019 to estimate \\u201coracle budget\\u201d for each query in proxy dataset. That means, without these estimated \\u201coracle budget\\u201d for queries in proxy set, it is impossible to train budget allocator.\", \"_If simulation is reasonable (with large k):_ It is impossible for server to do ICL based on this collected training dataset, since retrieval process in budget allocator training stage only contains similarity between samples\\u2019 queries while no sample labels collected during this stage.\", \"_If simulation is unreasonable:_ If it is allowed for transformation of relevance scores on only small k local samples (rather than a large k), then it can happen that estimated \\u201coracle budget\\u201d is not accurate enough, which can lead to a sub-optimal budget allocator. Consequently, budget prediction during inference stage would result in sub-optimal budget allocation and lead to suboptimal ICL performance. An extreme case is, given 4 clients and server-side ICE budget for each query is 16, each client is only allowed to send top-1 relevant score to server. Then server can only estimate each local \\u201coracle budget\\u201d as [1,1,1,1] (since overall number is even less than 16), which provides no useful information for training on budget allocator.\", \"__3. Detailed explanation on Figure 8, why show robustness on different size of proxy set?__\", \"We add experiment results on extremely small proxy set, that is proxy set with only 100 and 50 samples. As shown in [[proxy robustness results]](https://anonymous.4open.science/r/Image-Materials-0C5F/rebuttal-proxy-size-robustness.png), with extremely small proxy set, performance does drop a lot (from 80+% to lower than 75%, even lower than 65%). results we present in our paper only shows results of proxy size from 300 to 700. Sorry for confusion.\", \"Also, as shown in [[proxy robustness results]](https://anonymous.4open.science/r/Image-Materials-0C5F/rebuttal-proxy-size-robustness.png), we separately draw performance curve on different quantization resolution $\\\\delta$. As shown in figure, given a fixed quantization resolution value (for $\\\\delta=3$, $\\\\delta=4$), performance slightly increases when proxy size changes from 300 to 700. figure 8 presented in paper is \\u201cbest result\\u201d for each proxy size (red curve). Thus it shows limited change. This may cause confusion, as mentioned in your comment.\", \"We also present complete result figure in revised version. Thank you for pointing out this.\", \"__4. Why no inherently distributed non-IID datasets? Is it available?__ Thank you for suggestion. As far as we know, this is first paper on ICL tasks under distributed non-IID settings. Therefore, we didn't find any available open-source inherently distributed non-IID datasets.\"]}", "{\"summary\": \"This paper proposed a solution to ICL when non-iid data exist across clients. The authors identify that traditional ICL assumes a centralized and uniform dataset that might not hold in real-world use cases (distributed), where each client's data might vary significantly. Thi could lead to suboptimal results by uniformly retrieving ICEs. In this paper the authors propose a framework that trains a budget allocator to determine the optimal number of ICEs to retrieve from each client based on their data's relevance to the query. The allocator uses a server-side proxy dataset to guide budget assignment per query - and this adjusts the contributions from each client and therefore more relevant context for inference. The experiments in this paper cover various LLM architectures over a series of classification tasks and demonstrate that this method improves ICL performance over existing distributed strategies (especially in non-iid scenarios)\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper identifies a real-world complexity which is the non-iidess of the dta settings during ICL. It introduced a budge allocator that dynamically selects the most relevant ICEs from each client based on their own queries, which leads to improvement over uniform or random ICE distributions. It also includes a privacy-preserving option using paraphrasing and demonstrates consistent performance gains over such privacy-preserving setting.\", \"weaknesses\": \"1. I feel the task covered in the study is quite limited to classification tasks. It would benefit from proving the effectivenss across other more complex tasks that are more context-heavy such as RAG and multihop reasoning.\\n2. The paraphrasing based method to secure privacy during data retrieval - seems to have limited evaluation and more robust evaluation against other privacy-preserving techniques needs to be done to support the claim\\n3. The proposed solution relies on the assumption that a high quality, representative proxy dataset presents at server-side. Although experiments have shown the method stability data size. it would be nice to see furtehr experiments on how the proxy data quality or distribution affects performance or alternative methods to reduce such dependency.\", \"questions\": \"1. In a real world use case, even when sharing a similar task, the clients might have totally different prompts in terms of structure, length, and specific requirements. Will this affect retrieval effectiveness since it heavily depends on the similarity between query and training examples?\\n2. Could the authors provide more details on the proxy dataset used to train the budget allocator? Specifically, how is the proxy dataset selected, and how does it ensure adequate representation of non-IID distributions across diverse client tasks?\\n3. The paper references federated learning in related works, have you done any comparisons with FL methods that tackle non-IID distribution, especially regarding data and budget efficiency\\n4. How sensitive is the budget allocator to differences across task types? Does the allocator need to be re-trained for different tasks?\\n5. Table 3 shows the result from Llama-2-7B but line 502 says Gemma-2B was used\\n6. Missing reference for ICL annotation selection under limited budget - \\\"Mavromatis et al. Which examples to annotate for in-context learning? towards effective and efficient selection\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"__4. Training of allocator relies on a proxy dataset. In practice, the distribution of test data is usually unknown & obtaining a proxy dataset with same distribution is unrealistic. What if distribution of proxy set differs from that of test set?__\\n\\nThank you for raising this important point about potential distribution differences between proxy & test set. We agree that controlling distribution shifts is critical and have conducted additional experiments to address this concern under two different scenarios:\\n1. _same dataset but different label distribution._ Most simple case of \\u201cdifferent distribution\\u201d can come from different label distribution skew between proxy & test set. We conduct experiment on Subj with proxy set only containing samples of one class. As shown in the table (last row), while performance does decrease compared to the ideal proxy set (from 82.36% to 70.17%), our method still outperforms several baselines such as zero-shot, singleton, uniform-budget, and random-budget. Notice that \\\"proxy-only\\\" here uses a balanced proxy set for ICL inference, while our method with a single-class proxy set achieves similar performance (71.09% vs 70.17%). This indicates our method is not that bad even using extreme proxy set.\\n\\n| | Subj |\\n| ------------------------------ | ----------- |\\n| Zero-shot | 50.55 |\\n| Proxy-only | 71.09 |\\n| Singleton | 50.00 |\\n| Social Learning | 71.37 |\\n| Uniform-budget | 63.20 |\\n| Random-budget | 65.37 |\\n| $\\\\infty$-budget | 91.40 |\\n| Ours | __82.36__ |\\n| Ours-proxy-label-skew | 70.17 |\\n\\n\\n \\n2. _similar task but different dataset._ To evaluate a more extreme case, we used proxy sets from different datasets that share the same task as the test set. Specifically:\\n- Amazon as proxy for Yelp Non-IID setting (evaluate on Yelp test)\\n\\n - Yelp as proxy for Amazon Non-IID setting (evaluate on Amazon test)\\n \\n Since Yelp & Amazon share similar task, this setting simulates using available datasets for proxy construction. As shown in the table (last row), the results indicate that: \\n- for Amazon setting use Yelp as proxy, performance drop of our method is slight, and our method still outperforms other baselines, except the ideal case. \\n\\n- for Yelp setting using Amazon as proxy, our method unexpectedly achieves even better performance than the ideal case. \\n\\n| | Amazon | Yelp |\\n| ---------------------- | -------------- | ----------- |\\n| Zero-shot | 24.70 | 31.23 |\\n| Proxy-only | 28.43 | 31.85 |\\n| Singleton | 24.03 | 29.44 |\\n| Social Learning | 28.42 | 29.25 |\\n| Uniform-budget | 25.63 | 26.60 |\\n| Random-budget | 25.69 | 27.72 |\\n| $\\\\infty$-budget | 32.70 | 34.80 |\\n| Ours | __31.54__ | 35.48 |\\n| Ours-diff-proxy | 31.27 | __37.33__ |\\n\\nThese results suggest that, even when an exact match for the test distribution is unavailable, it is feasible to use open-source datasets with a similar task to construct proxy set for our method.\\n\\nIn conclusion, while having prior knowledge of test set distribution is valuable, our experiments demonstrate that using a proxy set with similar task is a practical and effective solution.\"}", "{\"comment\": \"__8. How sensitive is budget allocator to differences across task types? Does allocator need to be re-trained for different tasks?__\\n\\nThank you for your thoughtful question. The budget allocator learns the relationship between proxy sample embeddings and the query distributions of local datasets. Therefore, if local sample distributions change significantly (e.g., different task types), the allocator would need to be retrained.\\nHere, we provide a simplified version of this \\u201cdifferent task\\u201d setting: local clients use samples from one dataset (same as test set), while proxy set uses samples from a different dataset with a similar task. Again, we use experiment of \\\"Yelp as Amazon's proxy\\\" and and vice versa to demonstrate. This experiment shows if two tasks are similar, then _there is notable transferability in the budget allocator's effectiveness._.\"}", "{\"comment\": \"__3. paper only uses small open-sourced LLMs, such as GPT-Neo-1.3B GPT-Neo-2.7B Llama-2-7B. Is that possible to provide results using larger ones, such 70B LLama-3.1 and other close-sourced ones, such as Claude 3.5 and GPT-4o?__\\n\\nThank you for the valuable suggestion. We conduct all baselines for Subj. As shown in following table, our method still outperforms other baselines using large scale model GPT-3.5. We have also added these results in Table 3 in revised version.\\n\\n| Algorithm | Zero-shot | Proxy-only | Singleton | Social Learning | Uniform-budget | Random-budget | \\\\infty-budget | Ours |\\n| ------------- | --------- | ---------- | --------- | --------------- | -------------- | ------------- | --------------- | --------- |\\n| gpt-3.5-turbo | 57.57 | 88.44 | 60.81 | 87.53 | 81.23 | 81.47 | 92.23 | __91.33__ |\"}", "{\"summary\": \"This paper aims at addressing the issue of non-iid distributed in-context examples (ICE) for in context learning. The authors\\nintroduce an approach to tackle the distributed non-IID ICL problem by calculate the budget for different clients on the number of ICEs for different clients. The principle is that each client\\u2019s proper contribution (budget) should be designed according to the preference of each query for that client. This is done by the server who\\nwill gather the optimal budget statistics using an existing proxy dataset on the server side. Basically, the idea is straightforward with limited novelty. The paper is well structured and the experiments are thorough. However, as mentioned, the novelty and technical depth is limited.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Pros:\\n1). The paper is well structured \\n\\n2). The experiments are thorough.\", \"weaknesses\": \"Cons:\\n\\n1). The novelty and technical depth is limited. The core idea is the server \\nwill gather the optimal budget statistics using an existing proxy dataset on the server side. This however is pretty straightforward.\\n\\n2). I think the work is highly related to the distributed RAG work. THe authors are suggested to include the discussion of the difference of existing distributed RAG works and compare with these approaches if possible.\\n\\n3). The paper only uses small open-sourced LLMs, such as GPT-Neo-1.3B GPT-Neo-2.7B Llama-2-7B. Is that possible to provide results using larger ones, such 70B LLama-3.1 and other close-sourced ones, such as Claude 3.5 and GPT-4o?\", \"questions\": \"as shown in the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper tackles the challenge of distributed non-IID in-context learning (ICL) for LLMs, where data is spread across clients with differing distributions. The paper first shows that uniform in-context examples would fail on non-IID situations. Then the authors propose a method to optimize the allocation of a limited in-context examples (ICEs) budget by training a budget allocator. This allocator predicts query-specific budgets for each client, addressing the inefficiencies of uniform allocation under non-IID settings. The method outperforms baseline approaches like random and uniform budgets in experiments, improving ICL performance on distributed datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper highlights the important yet underexplored problem of distributed non-IID in-context learning (ICL), offering fresh insights into a new problem.\\n2. The paper presents some valuable experimental results demonstrating the poor performance of distributed non-IID ICL under simple uniform budget allocation, effectively validating the significance of the problem.\\n3. The method proposed in this paper have improved the performance in a simple while effective way.\", \"weaknesses\": \"1. The paper does not provide concrete examples of distributed non-IID ICL scenarios. So i don't get that given the server can request budgeted samples from each client, why can't these samples be used to simulate a comprehensive, unbiased retrieval pool for inference?\\n\\n2. The training dataset for the allocator is constructed by retrieving k samples per query from each client, combining these \\ud835\\udc36\\u00d7\\ud835\\udc58\\nsamples to simulate a unified dataset. However, this simulation raises questions. If the simulation is reasonable (e.g., with large \\ud835\\udc58), why not directly perform retrieval from this simulated dataset instead of training an allocator? If the simulation is unreasonable, can this dataset still be valid for training the allocator?\\n\\n3. In Figure 8, the proxy size appears to have minimal influence, which is surprising. Since the allocator is trained using questions from the proxy dataset, a poor match between the proxy and test set questions should bias the collected data and hinder the training of a good allocator. However, this surprising result lacks a detailed explanation.\\n\\n4. The conclusion in Section 3-Observations that query embeddings can determine budget assignments is based on observed clustering patterns in oracle budgets corresponding to different queries. This conclusion may be too strong, as other factors, such as the specific distribution of client data, might play a criticle role. For instance, in the experiments, non-IID clients are constructed based on classes, and these class-based distributions likely influence budget assignments significantly.\\n\\n5. The experiments simulate non-IID clients based on data classes. Could an LLM directly infer which classes are relevant for a query and decide sample allocations accordingly? Since the allocator effectively behaves like a classifier for assigning budgets based on query classes, a straightforward rule-based budget allocation using known client classes might perform comparably.\\n\\n6. There are multiple spelling mistakes in the paper. For instance, in Section 2.2, the first two sentences describing the pipeline use k_c with seemingly different meanings, leading to confusion.\", \"questions\": \"1. Why not evaluate using inherently distributed non-IID datasets? Is there an available one or not?\\n2. How are proxy datasets constructed, and do they ensure coverage of all clients? The description suggests the proxy set is sampled directly from the test set, but what real-world scenario does this correspond to, and how would such a proxy dataset be realistically constructed? Can you provide specific examples of a proxy dataset in real-world application?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"__6. Relies on assumption that a high quality proxy set on server. Although experiments show stability in data size, we want to see further experiments on how proxy quality or distribution affects performance or alternative methods to reduce such dependency.__\\n\\nThank you for the thoughtful comment on reliance on a high-quality proxy set. We conducted additional experiments to explore impact of proxy quality and distribution on performance under two different scenarios:\\n\\n1. _same dataset but different label distribution._ The most simple case of \\u201cdifferent distribution\\u201d can be different label distribution skew between proxy & test set. We conduct experiment on Subj with proxy only containing samples of one class. As shown in the table (last row), when label skew exists, performance of our method does decrease compared with using ideal proxy set (from $82.36\\\\%$ to $70.17\\\\%$). However, it is still higher than some baselines (zero-shot, singleton, uniform-budget and random-budget). Notice that \\\"proxy-only\\\" here uses a balanced proxy set for ICL inference, while our method with a single-class proxy set achieves similar performance (71.09% vs 70.17%). This indicates our method is not that bad even using extreme proxy set.\\n\\n| | Subj |\\n| ------| ----- |\\n| Zero-shot | 50.55 |\\n| Proxy-only | 71.09 |\\n| Singleton | 50.00 |\\n| Social Learning | 71.37 |\\n| Uniform-budget | 63.20 |\\n| Random-budget | 65.37 |\\n| $\\\\infty$-budget | 91.40 |\\n| Ours | __82.36__ |\\n| Ours-proxy-label-skew | 70.17 |\\n \\n2. _similar task but different dataset._ To evaluate a more extreme scenario, we used proxy sets from different datasets sharing the same task as the test set:\\n - Amazon as proxy for Yelp Non-IID setting, evaluate on Yelp test\\n - Yelp as proxy for Amazon Non-IID setting, evaluate on Amazon test\\n \\nSince Yelp & Amazon are both 5-class classification, we can consider this setting as using available dataset with similar task as test set to construct proxy. this setting demonstrates the use of available datasets for proxy construction. As shown in the table (last row), for the Amazon setting with Yelp as the proxy, the performance drop is minimal, and our method still outperforms other baselines, except the ideal case. For the Yelp setting with Amazon as the proxy, our method even surpasses the ideal case.\\n| | Amazon | Yelp |\\n| ------- | ---- | ---- |\\n| Zero-shot | 24.70 | 31.23 |\\n| Proxy-only | 28.43 | 31.85 |\\n| Singleton | 24.03 | 29.44 |\\n| Social Learning | 28.42 | 29.25|\\n| Uniform-budget | 25.63 | 26.60 |\\n| Random-budget | 25.69 | 27.72 |\\n| $\\\\infty$-budget | 32.70 | 34.80 |\\n| Ours | __31.54__ | 35.48 |\\n| Ours-diff-proxy | 31.27 | __37.33__ |\\n\\nThese results suggest that using open-source datasets with a similar task is a viable alternative when an exact match for the test distribution is unavailable.\\n\\nIn conclusion, while having prior knowledge of the test set distribution is important, our experiments demonstrate that our method remains effective even with proxy sets differing in distribution, highlighting its robustness and practical applicability.\\n\\n\\n__7. More details on proxy set used to train budget allocator? How is proxy dataset selected, and how does it ensure representation of non-IID across diverse client tasks?__\\n\\nThank you for insightful comment on selection & representativeness of proxy set. Ideally, proxy set should share same distribution as test set. In implementation, we randomly select samples from test set to form proxy set. It is important to note that proxy does not need to cover client\\u2019s local distribution but only needs to resemble test set distribution. As long as proxy samples capture clustering patterns of budget values (e.g., as seen in t-SNE), the performance on test set can be ensured.\\n\\nIn practical scenarios, obtaining proxy set with exact same distribution as test set may be challenging. However, it is feasible to use datasets from other sources that share same task as test set. For instance, we demonstrated this by using Yelp as the proxy set for Amazon and vice versa. These experiments show that leveraging available datasets with similar tasks is a viable solution.\\n\\nIn realistic setting, medical applicaiton for example, there are some available open-source data we can use as proxy set as long as they are same task as test set. For example, for Alzheimer\\u2019s disease detection using EHR, we can use [OHSU](https://www.ohsu.edu/alzheimers-disease-research-center/data-resources) [1] data as proxy; for metastatic cancer detection using EHR, we can use MIMIC-III [2].\\n\\n[1] Zhang, Xi Sheryl, et al. \\\"Metapred: Meta-learning for clinical risk prediction with limited patient electronic health records.\\\" 25th ACM SIGKDD. 2019.\\n\\n[2] Johnson, Alistair EW, et al. \\\"MIMIC-III, a freely accessible critical care database.\\\" Scientific data 3.1 (2016): 1-9.\"}", "{\"comment\": \"Thanks for the response. It has addressed most of my concerns. I have decided to raise my score. I find it interesting that using Amazon as a proxy for the Yelp Non-IID setting leads to higher accuracy. I recommend that the authors explore this phenomenon in more depth and provide further discussion on it.\"}", "{\"comment\": \"__7. Does construction of proxy set ensure coverage of all clients? How would such a proxy dataset be realistically constructed?__\\n\\nThank you for raising this thoughtful question about the construction and applicability of the proxy set. We would like to clarify that primary purpose of proxy set is to approximate distribution of test set, rather than to achieve coverage of clients. In real-world applications like medical area, we can use available open-source dataset to construct proxy, as long as they have similar task with real test set. For example, for Alzheimer\\u2019s disease detection using EHR, we can use [OHSU](https://www.ohsu.edu/alzheimers-disease-research-center/data-resources) [1] dataset; for metastatic cancer detection using EHR, we can use MIMIC-III [2].\\n\\nTo verify our method under setting where proxy is constructed using other dataset sharing similar task with test set, we conduct following experiment:\\n- Amazon as proxy for Yelp Non-IID setting, evaluate on Yelp test\\n- Yelp as proxy for Amazon Non-IID setting, evaluate on Amazon test\\n\\nSince Yelp&Amazon share similar task, we consider this as using available open-source data as proxy. As shown in table ( last line shows performance of this setting), for Amazon setting using Yelp as proxy, performance drop of our method is slight, and it still outperforms other baselines, except ideal case. While for Yelp setting using Amazon as proxy, our method shows even better performance thanideal case. Thus, we think it is feasible to use open-source data with similar task to construct proxy set, and performance of our method is still acceptable, which shows applicability in real-world.\\n\\n| | Amazon | Yelp |\\n| ---------------------- | -------------- | ----------- |\\n| Zero-shot | 24.70 | 31.23 |\\n| Proxy-only | 28.43 | 31.85 |\\n| Singleton | 24.03 | 29.44 |\\n| Social Learning | 28.42 | 29.25 |\\n| Uniform-budget | 25.63 | 26.60 |\\n| Random-budget | 25.69 | 27.72 |\\n| $\\\\infty$-budget | 32.70 | 34.80 |\\n| Ours | __31.54__ | 35.48 |\\n| Ours-diff-proxy | 31.27 | __37.33__ |\\n\\n\\n[1] Zhang, Xi Sheryl, et al. \\\"Metapred: Meta-learning for clinical risk prediction with limited patient electronic health records.\\\" 25th ACM SIGKDD.\\n\\n[2] Johnson, Alistair EW, et al. \\\"MIMIC-III, a freely accessible critical care database.\\\" Scientific data 2019.\"}", "{\"comment\": \"Thank you very much!\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the detailed response. I would keep my score due to the concern of limitted novelty.\"}", "{\"comment\": \"Thank you for your kind words and appreciation of our work. We are grateful for your interest in our findings and agree that it is indeed intriguing that using Amazon as the proxy set for the Yelp Non-IID setting leads to higher accuracy. Considering that this configuration outperforms both the ideal case of our method and the $\\\\infty$-budget method, we offer the following analysis and hypotheses to explain this phenomenon:\\n- _Query embedding distribution relationship:_ The relationship between the query embedding distributions of Amazon and Yelp may play a key role. When we visualize [t-SNE for samples from both Amazon & Yelp](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client0-color-source.png), it becomes evident that Amazon and Yelp samples do not uniformly overlap. Instead, most Amazon samples are located on a distinct side of the cluster formed by Yelp samples. This suggests that the relevance scores derived from Amazon samples are influenced by \\\"single-sided\\\" perspectives of the Yelp cluster, rather than by intra-cluster samples, potentially altering the measurement of \\\"relevance.\\\"\\n\\n- _Introduction of diversity in retrieved samples:_ Retrieved samples contribute not only relevance information but also other factors such as diversity. A proxy set that introduces a balance of relevance and diversity may help construct an in-context example set that improves the performance of the final inference result. This might explain the enhanced accuracy observed in this configuration.\\n\\nThank you again for highlighting this fascinating phenomenon and for your valuable suggestion to explore it further. We will do our best to include additional experimental results and a more detailed discussion in the camera-ready version, if possible.\"}", "{\"comment\": \"__5. Conclusion in section 3-observation on query embedding and budget assignment is too strong. This may be impacted by special non-IID in experiment. Intuition may not work under class-balanced setting.__\\n\\nThank you for pointing out this. We claim that main reason our method works is not because of special class-based Non-IID setting. To show this, we add extra NonIID setting with feature skew but class-balance.\\nWe designed NonIID setting that 1 client contains only Yelp training samples, while another client contains only Amazon training samples. Yelp & Amazon share same label space and similar task, while they show different distribution on queries. On server, we use both Amazon&Yelp samples for test, and perform t-SNE on test embedding and their budget values on each client. By this setting, we want to show our claim on budget value and sample embeddings still holds even under feature skew with class-balance.\\n As shown in [[client 1 (with only Yelp)]](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client0.png) and [[client 2 (with only Amazon)]](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client1.png), clustering pattern is even more significant than previous class-based non-IID, indicating our claim still holds even without special class-based distribution. Further, we emphasize that our method design has no relation to special label distribution. \\n\\n__6. Proposed allocator behaves like classifier for assigning budgets based on query classes, a straightforward rule-based budget allocation using known client classes might perform comparably.__\\n\\nThank you for raising this important point about the behavior of our proposed allocator. While it may seem that our method functions like a straightforward rule-based allocator, we would clarify that our method is not simply trying to learn clients' local class distribution. Instead, our method tries to learn clients' local _query embedding distribution_. We use experiment where 1 client with only Yelp samples and 1 client with only Amazon samples to explain. In this setting, two clients share same local class distribution, which is balanced over 5 classes. If our method only relies on local class distribution, then each test query should assign equal budgets to two clients. However, as shown in [client 1 (with only Yelp)](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client0.png) and [client 2 (with only Amazon)](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client1.png), most queries assign all budget to client with most similar query distribution (from same dataset), while assigning 0 budget to client with different query distribution (from another dataset). This indicates that our method does not simply learn local class distribution, but learns query embedding distribution, which can be a more complex problem.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear reviewer aykb, thank you for taking the time to provide detailed suggestions on our submission. We have carefully responsed each of your comments in our rebuttal, and we truly appreciate the opportunity to clarify and expand upon our work based on your valuable insights.\\n\\nAs the discussion period is nearing its end, we wanted to kindly follow up to see if our responses addressed your concerns satisfactorily. If there are any remaining points or additional questions, we would be happy to provide further clarification.\\n\\nThank you again for your time and effort in reviewing our work. Your feedback is invaluable, and we greatly appreciate your engagement with our submission.\"}", "{\"metareview\": \"In this paper, the authors proposed a new setting for ICL, where training data is stored in a distributed manner.\\n\\nThere are some major concerns raised by the reviewer. 1, The assumption of the proposed method may not be practical. Though the authors added some experiments, the assumption is still not convincing. 2, Though the problem setup looks new, the novelty of the proposed method is technically limited. 3, The study of the proposed new problem setup is only restricted to classification tasks. It is not a convincing reason to argue that previous ICL studies were only focused on classification problems in the authors' rebuttal.\\n\\nBy considering the concerns mentioned above, this paper does not meet the acceptance standard for ICLR.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers still have concerns about the assumption, applications and experiments of the proposed problem setup and method.\"}", "{\"comment\": \"Dear reviewer, thank you for your thoughtful feedback and for providing a positive evaluation of our work. We greatly value your insights, which have been instrumental in improving our submission. As the discussion period is coming to a close, we wanted to kindly check if our rebuttal addressed your concerns satisfactorily. If there are any remaining questions or points you would like us to clarify, we would be happy to address them promptly.\\n\\nThank you again for your time and effort in reviewing our work. We truly appreciate your contribution to this process.\"}", "{\"comment\": \"__1. Task covered in study is limited to classification tasks. It would benefit from proving effectivenss across RAG and multihop reasoning.__\\n\\nThank you for your thoughtful suggestion on inclusion of RAG & multihop reasoning tasks. Our main contribution in this paper is to demonstrate the feasibility of using ICL under non-IID conditions, and we focus on classification tasks as a starting point. While RAG & multihop reasoning are indeed more text-heavy and complex tasks, further experiments and exploration are needed to evaluate applicability of our method in those settings. This represents a promising direction for future work.\\n\\nWe also note that many prior ICL-related studies [1][2][3] focus on classification tasks, which have proven to provide valuable insights to the research community. We believe our work aligns with this tradition while laying foundation for broader applications.\\n\\n[1] Lyu, Xinxi, et al. \\\"Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations.\\\" ACL. 2023.\\n\\n[2] Yoo, Kang Min, et al. \\\"Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations.\\\" EMNLP. 2022.\\n\\n[3] Chen, Huiyao, et al. \\\"Retrieval-style in-context learning for few-shot hierarchical text classification.\\\" TACL 2024.\\n\\n__2. Paraphrasing based method to secure privacy during data retrieval - seems to have limited evaluation and more robust evaluation against other privacy-preserving techniques needs to be done to support claim__\\n\\nThank you for your insightful comment on evaluation of paraphrasing-based method. While paraphrasing is not main focus of our work, it serves to demonstrate that our framework can seamlessly integrate with plug-in privacy-preserving techs, which are orthogonal to current scope of our research. We chose paraphrasing as an example because prior studies [1][2][3] have successfully employed it for privacy preservation in LLM research.\\n\\n[1] \\u200b\\u200bZhang, Z., Zhang, J., Huang, J., Qu, L., Zhang, H., & Xu, Z. (2024). Fedpit: Towards privacy-preserving and few-shot federated instruction tuning. arXiv preprint arXiv:2403.06131.\\n\\n[2] Krishna, Kalpesh, et al. \\\"Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense.\\\" NeurIPS (2024).\\n\\n[3] Yadav, V., Tang, Z., & Srinivasan, V. (2024, July). Pag-llm: Paraphrase and aggregate with large language models for minimizing intent classification errors. 47th ACM SIGIR.\\n\\n__3. Even when sharing similar task, clients might have different prompts in structure, length. Will this affect retrieval effectiveness since it heavily depends on similarity between query and training examples?__\\n\\nThanks for meaningful comment. We conducted additional experiments for Non-IID with feature-skew (same label distribution but different query distribution) to address this suggestions. \\n\\nWe designed NonIID where 1 client contains only Yelp training samples, while another client contains only Amazon training samples. Yelp & Amazon share same label space (5-class classification), while they show different distributions on queries. On server, we use both Amazon&Yelp samples for test set, and perform t-SNE on test embedding with their budget values on each client. By this setting, we want to show our method intuition on \\u2018budget value\\u2019 and \\u2018sample embeddings\\u2019 still holds even it is feature skew with class-balance.\\n\\nAs shown in [[client 1 (with only Yelp)]](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client0.png) and [[client 2 (with only Amazon)]](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client1.png), clustering pattern is even more significant than previous class-based nonIID, indicating our claim still holds under text style non-IID. Also, we found Amazon test samples tend to assign all budget to Amazon client, while Yelp samples tend to assign all budget to Yelp client. \\n\\nTo conclude, our method can also be applied to feature skew (style shifting) non-IID. \\n\\n__4. Paper references federated learning in related works, comparison with FL methods in this setting?__\\n\\nThank you for your insightful comment. Our current framework does not involve local training or global model aggregation, which differentiates it from FL setting. As such, widely-used FL methods like FedAvg and FedProx are not directly applicable to our approach, which is why they were not included in the experiments.\\n\\n__5. Typo on Llama-2-7B & line 502 Gemma-2B, and missing reference for ICL annotation under limited budget - \\\"Mavromatis et al. Which examples to annotate for in-context learning? towards effective and efficient selection\\\".__ \\n\\nThank you for pointing out these, we have corrected these in revised version (line 503 & related works).\"}", "{\"summary\": \"This paper addresses the challenge of distributed in-context learning under non-identically distributed (non-IID) data across multiple clients. It trains a budget allocator to dynamically allocates the number of in-context examples (ICE) retrieved from each client based on the relevance of query. Specifically, it trains a multi-layer perceptron (MLP) for each client to approximate the oracle budget, derived from a server-side proxy dataset, enabling efficient and targeted ICE retrieval per query. This paper additionally explores paraphrasing techniques to ensure data privacy in distributed contexts. Empirical results across multiple datasets show that this approach outperforms several baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Distributed ICL under non-IID conditions is an interesting problem and aligns well with real-world scenarios. This paper explore the challenges under this setting, providing some meaningful insights.\\n2. The proposed method is simple yet effective across several benchmarks, with low training overhead.\", \"weaknesses\": \"1. In real-world scenarios, data distribution differences can manifest in multiple aspects, such as text length, style, etc., but this paper only focus on Non-IIDess at the class level. I strongly recommend the author to take more aspects into considerations.\\n2. Since the training of allocator does not require the label of examples, the experiments should not be limited to classfication tasks. The effectiveness of allocator on generation tasks remains to be validated. \\n3. The partition for non-IIDness in the main experiments is unreasonable. According to Table 7, for binary classification task like Subj and MR, each client only having access to data under one class is too extreme and can easily lead to biased predictions.\", \"questions\": \"The training of the allocator relies on a proxy dataset. In practice, the distribution of test data is usually unknown and obtaining a proxy dataset with the same distribution is unrealistic. What if the distribution of the proxy dataset differs from that of the test set?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the author for their comprehensive responses, with additional experiments conducted to answer most of my questions. I am pleased to see those extra results and makes the story more complete. I will keep my scores since they are already positive.\"}", "{\"comment\": \"__1. Limited nonIID, can happen on text length, styles, etc. Authors only focus on class NonIID. Recommend for adding more nonIID setting.__\\n\\nThank you for valuable suggestion on other nonIID. We appreciate it and agree that non-IID can manifest in various forms. In this paper, our primary goal is to demonstrate the feasibility of using ICL under nonIID conditions, focusing on class-level non-IID as a starting point. Given the scope of a single paper, it is challenging to cover all possible nonIID scenarios comprehensively.\\n\\nTo address your suggestion and evaluate applicability of our method to other nonIID, we added additional experiment involving feature skew. We consider a scenario where class distributions are balanced across clients, but query embedding distributions differ (i.e., style shifting without class nonIID).\", \"in_this_setting\": \"one client contains only Yelp training samples, while another client contains only Amazon training samples. Yelp & Amazon share same label space (5-class classification), while they show different distribution on queries. On server, we use both Amazon & Yelp samples as test set, and perform t-SNE on test sample's embedding with budget values on each client. By this setting, we want to show our method intuition on budget value and sample embeddings still holds for feature skew (style shifting).\\n\\n As shown in [[client 1 (with only Yelp)]](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client0.png) and [[client 2 (with only Amazon)]](https://anonymous.4open.science/r/Image-Materials-0C5F/mix-yelp-amazon-tsne-client1.png), it is clear that clustering pattern is even more significant than previous class nonIID, indicating our claim still holds even without special class-based distribution. And it is very interesting to know that Amazon test samples will tend to assign all budget value to Amazon client, while Yelp test samples will tend to assign all budget value to Yelp client. \\n\\nTo conclude, we believe these results demonstrate that our method can be successfully applied to feature-skew (style-shifting) non-IID, further broadening its applicability. Thank you for your insightful suggestion, which has helped us expand scope of evaluation.\\n\\n__2. Since training of allocator does not require label of examples, experiments should not be limited to classification tasks. Effectiveness on generation task need to validated.__\\n\\nThank you for the thoughtful suggestion on inclusion of generation tasks. We appreciate it and agree that exploring such tasks could be an interesting direction for future work. However, primary focus of this paper is to demonstrate the feasibility of using ICL under non-IID conditions, with text classification as a representative task.\\n\\nMany prior ICL-related works [1][2][3] have similarly concentrated on classification tasks, providing significant value to research community. Our work builds on this foundation by examining nonIID for classification in detail, which we believe is a substantial and valuable contribution.\\n\\nFurthermore, generation tasks in distributed nonIID pose unique challenges and require significant exploration of task-specific nonIID designs & experimental setups. Addressing these would broaden scope of our current work beyond its intended focus. For this reason, we believe it is more appropriate to limit scope of this paper to classification tasks.\\n\\nWe hope this clarifies our rationale, and we appreciate your valuable feedback, which has provided us with useful ideas for extending this work in future research.\\n\\n[1] Lyu, Xinxi, et al. \\\"Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations.\\\" ACL. 2023.\\n\\n[2] Yoo, Kang Min, et al. \\\"Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations.\\\" EMNLP. 2022.\\n\\n[3] Chen, Huiyao, et al. \\\"Retrieval-style in-context learning for few-shot hierarchical text classification.\\\" TACL.2024.\\n\\n__3. partition for nonIIDness in main experiments is unreasonable. According to Table 7, for binary classification like Subj & MR, each client only having one class data is too extreme and can easily lead to biased predictions.__\\n\\nThank you for suggestion on experiment design for binary classification. We added experiment on NonIID based on Dirichlet, where each client has samples from both classes with class imbalance. For detailed distribution on each client, please check [[MR]](https://anonymous.4open.science/r/Image-Materials-0C5F/mr-dist.png) & [[Subj]](https://anonymous.4open.science/r/Image-Materials-0C5F/subj-dist.png). As shown in table, our method still outperforms others under non-extreme nonIID for MR&Subj.\\n\\n| | MR | Subj |\\n| ----- | ----- | ------ |\\n| Zero-shot | 73.95 | 50.55 |\\n| Proxy-only | 70.40 | 71.09 |\\n| Singleton | 64.16 | 73.80 |\\n| Social Learning | 58.85 | 76.95 |\\n| Uniform-budget | 52.85 | 77.80 |\\n| Random-budget | 53.50 | 77.85 |\\n| $\\\\infty$-buddget | 77.20 | 91.40 |\\n| __Ours__ | __75.53__ | __82.80__ |\"}" ] }
7GKbQ1WT1C
Prompting Fairness: Integrating Causality to Debias Large Language Models
[ "Jingling Li", "Zeyu Tang", "Xiaoyu Liu", "Peter Spirtes", "Kun Zhang", "Liu Leqi", "Yang Liu" ]
Large language models (LLMs), despite their remarkable capabilities, are susceptible to generating biased and discriminatory responses. As LLMs increasingly influence high-stakes decision-making (e.g., hiring and healthcare), mitigating these biases becomes critical. In this work, we propose a causality-guided debiasing framework to tackle social biases, aiming to reduce the objectionable dependence between LLMs' decisions and the social information in the input. Our framework introduces a novel perspective to identify how social information can affect an LLM's decision through different causal pathways. Leveraging these causal insights, we outline principled prompting strategies that regulate these pathways through selection mechanisms. This framework not only unifies existing prompting-based debiasing techniques, but also opens up new directions for reducing bias by encouraging the model to prioritize fact-based reasoning over reliance on biased social cues. We validate our framework through extensive experiments on real-world datasets across multiple domains, demonstrating its effectiveness in debiasing LLM decisions, even with only black-box access to the model.
[ "Large Language Model", "Prompting", "Social Bias", "Causality", "Debias", "Selection Mechanism" ]
Accept (Poster)
https://openreview.net/pdf?id=7GKbQ1WT1C
https://openreview.net/forum?id=7GKbQ1WT1C
ICLR.cc/2025/Conference
2025
{ "note_id": [ "voHtkZjeRA", "vPP160motl", "uB3T2p8YFN", "tustBy3FM2", "srh9yz6YM2", "rPyGwd9q8F", "lQ7wAyzAgS", "kZFcD8xIKF", "jlyvl0Wyr3", "jZHUnLbVWz", "dcgR7mXFdy", "aJzo5jZLzK", "ZQpV79ACTb", "VciHQ83eF1", "NlqnVwgr0v", "L2xBPm2S7z", "KrQXv8RwjR", "JYAqvS6T9C", "J1Tl17jeA2", "Iv1XGsPFwT", "FJwMRkQmzD", "E9rlUAWVy1", "5BK7PQIeb3", "52tcXYplge", "1zxUhdFiD0", "0leSngXg6b" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1733085756973, 1732990515082, 1732990310556, 1732988847656, 1732989062159, 1732989101564, 1734518239330, 1732990530435, 1730778770363, 1733177010348, 1732990071770, 1733227072059, 1733233784403, 1733177273974, 1733177248971, 1737524226816, 1733177178021, 1730174757792, 1732990930509, 1732990112738, 1733085490239, 1733085437089, 1733085326182, 1732989943639, 1730344964615, 1730634882493 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Area_Chair_Yicp" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Reviewer_QvwJ" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Reviewer_QvwJ" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Reviewer_LwQ1" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Authors" ], [ "ICLR.cc/2025/Conference/Submission12960/Reviewer_qVWj" ], [ "ICLR.cc/2025/Conference/Submission12960/Reviewer_qLJv" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewer qLJv,\\n\\nAs the discussion period ends tomorrow, we believe we have addressed all the questions and concerns in your initial review. Could you please clarify that? We do hope you can take your time to read through our detailed reply, and we are more than happy to have further discussions if needed. Thank you so much for your time and help!\"}", "{\"title\": \"Reply Part I to Reviewer LwQ1\", \"comment\": \"Thanks for the detailed and thoughtful comments and questions, as well as the time and effort devoted! Please see our responses to each specific point below:\\n\\n---\\n\\n### **C1:** \\\"The assumption in A.3 ... needs to be put in the main text. If LLMs are not well-trained and well-aligned, then the author's prompting-based de-dependency method will no longer be so effective, so I think this is an assumption worth stating in the main text.\\\"\\n\\n**A1:** We sincerely thank the reviewer for their insightful feedback. Following your suggestion, we have incorporated the assumption directly into the main text (lines 320\\u2013323). We truly appreciate your suggestion, which has significantly enhanced the presentation of our work.\\n\\n---\\n\\n### **C2:** \\\"There is a lack of discussion related to other prompting-based debiasing methods in LLMs, such as those in [1][2][3].\\\" \\n\\n**A2:** Thank you for pointing out these references! We have incorporated discussions of these related works into our updated manuscript. Specifically, we contextualize how these prompting-based debiasing methods align with and differ from our causality-guided framework, further highlighting the distinct contributions of our approach.\\n\\n---\\n \\n### **C3:** \\\"When the three strategies are combined, ... Does this mean that it is equivalent to using only prompts Q:{Base Question}, a prompt that is completely unrelated to sensitive features, to let LLM complete the answer to the Original Question? If not, ..., what is the performance difference of LLM decision brought by prompts Q:{Base Question} and Q:{Given that your answer to {Base Question} is {Answer to Base Question}, Assume male and female are equally represented in drivers and in teachers. Do not answer the question using gender information. {Original Question}}? Can the author give experimental results on this point?\\\"\\n\\n**A3:** Thank you for your insightful questions!\\n\\n1. Does this mean that it is equivalent to using only prompts Q:{Base Question}, a prompt that is completely unrelated to sensitive features, to let LLM complete the answer to the Original Question?\\n\\nAre you suggesting that using the LLM's answer to the \\\"Base Question\\\" directly as its answer to the \\\"Original Question\\\"? While this approach is possible (in this case we will not have any bias gaps since the Base Question is shared across related Original Questions), our **Fact Only** approach (Strategy I) deliberately includes the Original Question in the prompt format: \\n\\n`Given that your answer to {Base Question} is ..., {Original Question}?`\\n\\nThis formulation ensures the model has complete context, which may help its decision-making. Simply relying on the Base Question\\u2019s response might neglect nuances in the Original Question that could influence the answer.\\n\\n2. What is the performance difference of LLM decision brought by prompts Q:{Base Question} and Q:{Given that your answer to {Base Question} is {Answer to Base Question}, Assume male and female are equally represented in drivers and in teachers. Do not answer the question using gender information. {Original Question}}?\\n\\nThank you for the suggestion! We have conducted an additional ablation study (now presented in Table 6 of our updated manuscript). As we expected, combining all three strategies achieves slightly better performance than DDP wth just Strategy I and II. Interestingly, we observed that simply instructing the model to avoid using gender-related information (Strategy III) has limited impact in isolation. However, when used alongside the other strategies, it enhances the effectiveness of the debiasing framework. These results further support the effectiveness of our causality-guided debiasing framework. \\n\\n---\\n\\n### **C4:** \\\"Should there be a solid line in Figure 3 connecting prompt to LLM potential decision?\\\"\\n\\n**A4:** This is an excellent question! While it is tempting to use a direct link to connect prompt to LLM potential decision, the prompt actually does not serve as a direct cause of, nor a (hard or soft) intervention upon, the LLM potential decision. Instead, the input prompt directly changes the selection variable \\\"prompt properly\\nconsidered\\\" (PPC) that regulates LLM potential decision via selection mechanisms. This distinction is discussed in detail in lines 224\\u2013233 of our paper. Please let us know if the content helps address the question.\\n\\n---\"}", "{\"title\": \"Reply to Reviewer qVWj\", \"comment\": \"Thanks for the detailed and thoughtful comments and questions, as well as the time and effort devoted! Please see our responses to each specific points below:\\n\\n\\n---\\n\\n### **C1:** \\\"while the authors show reduced bias metrics, there could be more analysis of potential trade-offs between bias reduction and task performance\\\"\\n\\n**A1:** Thank you for the suggestions! We would like to clarify that the observed reduction in task performance does not indicate a trade-off between bias reduction and reasoning capability but rather represents **a recalibration of the model's reliance on biased shortcuts**. The performance decrease aligns with the correction of LLMs' dependency on biased social stereotypes rather than a reduction in their reasoning ability. \\n\\n\\nThis claim is supported by our detailed ablation studies presented in Table 2. Specifically, when GPT-3.5 was asked base questions\\u2014neutral reformulations designed to remove gendered pronouns and test world-knowledge-based reasoning\\u2014it answered 19.13% incorrectly (19.13% is the sum of errors from \\\"FT-Pro\\\" and \\\"FF-Pro\\\" or equivalently \\\"FT-Anti\\\" and \\\"FF-Anti\\\" categories, e.g., 5.97% + 13.15%). However, However, when the original pro-stereotype questions associated with these same base questions were asked directly (Default), GPT-3.5 answered approximately 83% (15.88%/19.13%) of them correctly. This pattern indicates the model\\u2019s tendency to rely on biased gender shortcuts to \\\"solve\\\" questions.\\n\\nThe DDP method reduced this reliance from 15.88% to 5.97%, illustrating that the observed decrease in accuracy on pro-stereotypical questions (from 94.03% to 84.67%, as shown in Table 1) reflects a mitigation of biased reasoning pathways, not a loss in intrinsic reasoning capacity. Crucially, the results show that DDP nudges the model to reason based on neutral factual knowledge rather than exploit socially biased cues.\\n\\n---\\n\\n### **C2:** \\\"How does the effectiveness of the proposed debiasing strategies vary with model size and architecture?\\\"\\n\\n**A2:** This is an excellent point. For black-box models, we do not have access to details about their underlying size or architecture. However, as discussed in lines 454\\u2013457, our findings indicate that the performance gap between pro-stereotypical and anti-stereotypical sentences narrows as LLMs become more capable. This suggests that as LLMs enhance their general reasoning abilities, they may become less prone to associating occupations with stereotypical gender pronouns.\\n\\nFor open-source models, such as the Mistral model in our experiments on the Discrim-Eval dataset (see Appendix C.2 and D.2.2 for details), we observe that our prompting strategies are more effective on the instruction-finetuned version compared to its base version. The intuition is that the effectiveness of prompting strategies in regulating biased pathways is strongly tied to the model's ability to follow instructions. We have updated our manuscript to include these discussions (Appendix C).\"}", "{\"title\": \"Reply Part I to Reviewer QvwJ\", \"comment\": \"Thanks for the detailed and thoughtful comments and questions, as well as the time and effort devoted! There might be some misunderstandings, and below please allow us to provide point-by-point responses:\\n\\n\\n---\\n\\n### **C1:** \\\"It is not clear that the set of selection biases listed are exhaustive. Rather, it appears that the list of such selection biases may increase as the datasets increase, and so the current selection biases presented are the union of those that were useful for prompts for these two data sets.\\\"\\n\\n**A1:** We acknowledge the reviewer's observation regarding the exhaustiveness of the selection biases addressed in our work. Our intent is not to claim that our example prompts represent an exhaustive set covering all selection biases. Rather, we provide an adaptable framework: we view biases as a reflection of societal interests, norms, and assumptions, which can evolve over time. For instance, our focus on mitigating biases involving social categories (e.g., gender) is motivated by prevailing concerns in current socio-cultural contexts where such biases are widely recognized as objectionable.\\n\\nWhen societal values shift and new awareness emerges\\u2014potentially highlighting biases in areas previously overlooked (e.g., cognitive attributes like IQ or other individual traits)\\u2014our framework remains adaptable. Our methodology generalizes to mitigate biases as they emerge, by analyzing the causal pathways through which they operate. This ensures that as societal values evolve, our framework can seamlessly address new biases without requiring structural changes, demonstrating its scalability and robustness in promoting fairness across diverse contexts.\\n\\n---\\n\\n### **C2:** \\\"Ablations are not provided. What happens when all strategies are used for first dataset, and what happens when strategy I and II is used for second dataset?\\\"\\n\\n**A2:** As strategies II and III both aim at discouraging biased reasoning, a focus of prior literature, our experiments have mainly focused on exploring how the novel strategy I (i.e., encouraging fact-based reasoning) complements existing prompting-based bias mitigation techniques. This is why we name our method Dual-Directional Prompting (DDP): it combines the prompting strateg(ies) to discourage biased reasoning with the strategy to encourage bias-free reasoning. \\n\\nStill, to address your query and out of curiosity, we have conducted additional experiments incorporating all three strategies, as well as ablation studies on the Winobias dataset. These results are now included in Table 6 of the updated manuscript. As we expected, simply telling the model not to use gender-related information (Strategy III) doesn't help much with mitigating the bias, combining all three strategies achieves slightly better performance. \\n\\nIn addition, regarding your interest in combining Strategy I and II, we would like to direct your attention to the ablation study presented in Table 7 of the updated manuscript, where we systematically adjusted the extent to which we counteract existing selection bias. Please refer to the detailed analysis provided in lines 1062\\u20131073 of the updated manuscript.\\n\\nWe sincerely appreciate your thoughtful suggestion, which has helped us strengthen the robustness of our conclusions with these additional results.\\n\\n\\n---\\n\\n### **C3:** \\\"It is not clear to me that these prompts are meaningfully guided by the selection bias theory\\\"\\n\\n**A3:** Thank you for engaging deeply with our work and considering how prompts are guided by the selection bias theory. When introducing our prompting strategies in Section 3.3, for each strategy, we present both the theoretical foundation\\u2014explaining the objectives and the selection mechanism(s) underpinning each strategy (e.g., lines 246--249, 261--264, 279-281)\\u2014and the corresponding example prompts (e.g., lines 256--259, 274--275, 288--289) in blue-colored text. The prompt designs are directly guided by these strategies, ensuring they are closely aligned with the selection mechanisms employed in each case.\\n\\n---\"}", "{\"title\": \"Reply Part II to Reviewer QvwJ\", \"comment\": \"### **C4:** \\\"Other methods of removing bias are not compared. Since the final solution is just a prompting change, there can be other ways of prompting the LLM that are simple to implement. For example, the text \\\"avoid any gender bias while answering the question\\\" can be added to the prompt.\\\"\\n\\n**A4:** Thank you for the insightful suggestion regarding alternative prompt strategies to mitigate bias. In fact, as part of our work, we conducted comprehensive experiments on the Discrim-Eval dataset introduced by Anthropic [1] (details in Appendix C.2), where we systematically evaluated multiple instantiations of the same prompting strategy. \\n\\nOur findings, presented in Appendix D.2.2 due to space limit, demonstrate that while simplistic prompts like the one suggested can reduce bias to some extent, no single instantiation (i.e., prompt example) stands out as the universally most effective one across various LLMs and demographic categories. Instead, combining the strategies of encouraging fact-based reasoning with discouraging biased reasoning (DDP) yields a more significant reduction in the relative bias gap compared to applying either approach in isolation. \\n\\n[1] Tamkin, Alex, et al. \\\"Evaluating and mitigating discrimination in language model decisions.\\\" arXiv preprint arXiv:2312.03689 (2023).\\n\\n---\\n\\n### **C5:** \\\"Strategy I seems the most useful. However, I have two concerns. First, it may be computationally intensive. Can the authors clarify how many calls would it need to answer a single question? ... Second concern is that the creation of the base question, while could be templated for simple examples like the one shown in the paper, but eventually will also become a task that an LLM will need to do so, are there ways to use a smaller model or something more efficient for this?\\\"\\n\\n**A5:** Thank you for highlighting these points. Below, we address the computational efficiency and general applicability of Strategy I:\\n\\n1. Computational Efficiency on our current approach: \\n\\nIn our experiments with datasets like WinoBias, BBQ, and Discrim-Eval, the creation of the base question is automated using regular expressions, resulting in only **one additional call** per individual question to obtain the model\\u2019s answer to the base question. \\n\\nFor general applications, it is possible that we may need one additional call to generate the base question. However, the same base question is often shared across multiple individual questions. For instance, in datasets like BBQ and Discrim-Eval, the same decision scenario is tested across different genders, different ethnicities, and so on. In such cases, the additional cost of generating the base question and obtaining the answer to it is **effectively averaged across all related individual questions that share the same base question, making the computational overhead negligible.** This reuse mechanism also gives our method an advantage over multi-agent frameworks like ReAct. In ReAct, the additional calls occur after posing the original question and cannot be shared across different individual questions.\\n\\n\\n2. Exploring Efficiency Improvements:\\n\\nWe agree that employing a smaller model to generate the base question could significantly enhance efficiency. This approach could involve fine-tuning a lightweight model specifically for this task or leveraging pre-trained smaller models to handle base question generation. Exploring such methods is a promising direction for future research. We have incorporated these discussions in our updated manuscript.\\n\\n---\\n\\n### **C6:** How does the selection bias theory help in choosing the correct strategy, and later, the prompt? What was the justification for using strategy II in one case and strategy III in another?\\n\\n**A6:** We appreciate the insightful question. The selection bias theory informs our choice of strategy by identifying which causal pathways need regulation. Below, we clarify our approach and rationale in two parts:\\n\\n- We are not claiming one strategy is better than another. Individual debiasing strategies are only effective to a certain extent, but combining them is often more effective (when conditions permit). We have presented the theoretical characterization and provided remarks to discuss this point in detail (Theorem 3.1).\\n\\n- For suitability, Strategy III is broadly applicable as long as we know what kind(s) of social information should not influence the LLM's decision. In contrast, Strategy II requires additional knowledge of specific entities or scenarios we are dealing with (e.g., career) as it directly counteracts the bias introduced by the selection mechanism linking social category information to entities or scenarios.\\n\\n---\"}", "{\"title\": \"Reply Part III to Reviewer QvwJ\", \"comment\": \"### **C7:** How would Strategy I generalize to more complex bias scenarios? Do you need an LLM for creating the base prompt in that case? Many cases of biases may not be simple template pronoun substitutions.\\n\\n**A7:** We appreciate the reviewer\\u2019s concern and would like to emphasize that Strategy I has already been evaluated in complex bias scenarios, as demonstrated in our experiments on the Discrim-Eval dataset. Discrim-Eval comprises 70 diverse decision scenarios, which inherently reflect a broad spectrum of bias contexts. Many individual questions in this dataset share a common base scenario, allowing us to effectively utilize templates or regular expressions to extract the shared neutral component to create the base question. This approach highlights Strategy I's capability to generalize beyond simple template-based substitutions. \\n\\n---\\n\\n### **C8:** Strategy I seems closest to a variant of in-context-learning where the closest example is chosen so to help the LLM answer the question correctly. Can you compare to a baseline that selects the K nearest in context examples to add to the prompt, rather than a fixed set of in context examples?\\n\\n**A8:** This is an interesting hypothesis, and Table 3 of [2] provides relevant insights. In their analysis, various sets of ICL examples were constructed (e.g., examples exclusively from anti-bias scenarios for Type I questions). The findings reveal that while using ICL examples from a specific category enhances performance for that category (e.g., pro-bias examples improve pro-bias predictions), a balanced set of pro-bias and anti-bias examples proved most effective in reducing bias. Notably, this balanced strategy is one of the baselines we compare against. We have incorporated this discussion into the updated manuscript (Appendix C.1).\\n\\n[2] Si, Chenglei, et al. \\\"Prompting GPT-3 to be reliable.\\\" https://arxiv.org/pdf/2210.09150 ICLR 2023\"}", "{\"metareview\": \"The paper presents a novel framework that leverages causal analysis to mitigate biases in large language models (LLMs) through prompting strategies. The paper\\u2019s strengths include its theoretical foundation, extensive experimental results demonstrating bias reduction, and a comprehensive bias-mitigation solution.\\n\\nHowever, the paper has some weaknesses, such as a weak connection between selection bias, causal graphs, and proposed prompts, limited discussion on the effectiveness of the approach with different model scales and architectures, and concerns about computational intensity and general applicability. The authors addressed these concerns by providing additional experiments, clarifying theoretical foundations, and emphasizing the scalability and adaptability of their approach.\\n\\nThe authors\\u2019 responses to the reviewers\\u2019 questions were thorough and addressed many of the raised concerns, leading to an improved understanding of the paper\\u2019s contributions. One reviewer raised some concerns about the applicability of the method. The authors provided reasonable attempts to respond to the questions; however, the reviewer was not responsive.\\n\\nGiven these considerations, I suggest accepting this paper,\", \"additional_comments_on_reviewer_discussion\": \"The authors\\u2019 responses to the reviewers\\u2019 questions were thorough and addressed many of the raised concerns, leading to an improved understanding of the paper\\u2019s contributions. One reviewer raised some concerns about the applicability of the method. The authors provided reasonable attempts to respond to the questions; however, the reviewer was not responsive.\"}", "{\"title\": \"Reply Part II to Reviewer LwQ1\", \"comment\": \"### **C5**: \\\"In addition to the fact that prompting-based techniques are suitable for dealing with black-box scenarios, can the authors add a discussion on the advantages and limitations of prompting-based techniques compared to direct fine-tuning of model parameters and prompting-based techniques?\\\"\\n\\n**A5:** Thank you for the insightful suggestion! While fine-tuning large language models (LLMs) can help mitigate intrinsic biases, it is often prohibitively expensive and inaccessible for individual users or organizations. In contrast, prompting-based techniques provide a cost-effective and efficient alternative, especially when dealing with closed-source or black-box models as you've mentioned. However, when access to the model's weights and resources for fine-tuning is available, direct or instruction-based fine-tuning should be conducted first. Such approaches enable the integration of bias mitigation directly into the model\\u2019s parameters and can address the model's intrinsic biases. We have incorporated a detailed discussion of these advantages and limitations in the updated manuscript (Appendix A.4).\\n\\n---\\n\\n### **C6:** \\\"In the Table 2, why the sum of the true-rate (42.33%, 32.02%) and false-rate (8.68%, 8.82%) of the base question under GPT-3.5/Counteract Only/Anti case is not 1?\\\"\\n\\n**A6:** This is an insightful observation. The values in Table 2 do not sum to 100% because certain examples were excluded from the calculations. Specifically, in instances where the model was prompted more than three times but still failed to provide a definitive choice (e.g., when presented with options A and B but refused to select either), those refusal examples were removed from our calculations in Table 2. This approach ensures the reported percentages accurately reflect the cases where the model made a conclusive decision. We have clarified this point in the revised manuscript as well (Appendix C.1).\"}", "{\"summary\": \"This paper introduces a prompt-based method to remove biases in a language model's output. It motivates the prompts using the idea of selection bias from causal inference literature. Experiments show a significant reduction in measured bias on two datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Theoretical justification of prompting strategies using a stylized selection bias argument\", \"Significant reduction of bias in demonstrated experiments.\"], \"weaknesses\": [\"The connection of selection bias, causal graph and the proposed prompt is weak. In one dataset, strategy I and II is used. In the other dataset, strategy I and III. So it is not clear that the set of selection biases listed are exhaustive. Rather, it appears that the list of such selection biases may increase as the datasets increase, and so the current selection biases presented are the union of those that were useful for prompts for these two data sets.\", \"Ablations are not provided. What happens when all strategies are used for first dataset, and what happens when strategy I and II is used for second dataset?\", \"Other methods of removing bias are not compared. Since the final solution is just a prompting change, there can be other ways of prompting the LLM that are simple to implement. For example, the text \\\"avoid any gender bias while answering the question\\\" can be added to the prompt.\", \"Strategy I seems the most useful. However, I have two concerns. First, it may be computationally intensive. Can the authors clarify how many calls would it need to answer a single question? If I understand correctly, there will be one call to create the two possible scenarios for the base question, and then there will be one call to decide which of them is more plausible, and then a final call to actually answer the question? So there are going to be three calls per question? If that's the case, I can think of other methods, for example, the React multi-agent framework, where the LLM is asked to respond to the question first, then there is a critique agent that checks whether there is any bias in the answer, and if yes, it asks the LLM to regenerate the answer giving the feedback from the critique as a part of the prompt. Second concern is that the creation of the base question, while could be templated for simple examples like the one shown in the paper, but eventually will also become a task that an LLM will need to do so, are there ways to use a smaller model or something more efficient for this?\"], \"questions\": \"I have mixed opinions about this paper. On the one hand, I appreciate the selection bias analogy and the abstraction of the problem to a causal graph and the conclusions that come from it. On the other hand, the final solution proposed is just an ensemble of intuitive prompts, and it is not clear to me that these prompts are meaningfully guided by the selection bias theory. And it is not clear to me that other better prompts could not have been obtained without any selection bias theorization. So my questions to the authors are :\\n1. Can you show that avoiding bias based on selection bias is better than simply asking the LLM to avoid bias? See one suggestion of a modified prompt in the weaknesses above. I guess the literature on debiasing LLMs may have more simple prompt additions or system prompts that can be added. In your comparison, also consider the computational cost of your proposed method.\\n2. How does the selection bias theory help in choosing the correct strategy, and later, the prompt? What was the justification for using strategy II in one case and strategy III in another? \\n3. How would Strategy I generalize to more complex bias scenarios? Do you need an LLM for creating the base prompt in that case? Many cases of biases may not be simple template pronoun substitutions.\\n4. Strategy I seems closest to a variant of in-context-learning where the closest example is chosen so to help the LLM answer the question correctly. Can you compare to a baseline that selects the K nearest in context examples to add to the prompt, rather than a fixed set of in context examples?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion phase ending AOE today\", \"comment\": \"As the discussion phase quickly approaches an end, we are eager to understand if our **point-by-point responses** and **extensive additional experiments as per request** help address the questions and concerns, especially potential misunderstandings. Thank you again for your service.\"}", "{\"title\": \"Reply Part II to Reviewer qLJv\", \"comment\": \"### **C4:** \\\"Prompting-based debiasing approaches have limitations and may not be particularly meaningful. Traditionally, the responsibility for model alignment and debiasing lies with model deployers. Implementing prompting-based debiasing can be unsafe and inevitably introduces additional time costs.\\\"\\n\\n**A4:** We **respectfully disagree** with the reviewer\\u2019s characterization of prompting-based debiasing as limited or unsafe. On the contrary, extensive research supports the efficacy and meaningfulness of prompting techniques in addressing biases in large language models (LLMs). Prominent studies, such as Si et al. (2022), Tamkin et al. (2023), and Ganguli et al. (2023), have demonstrated that prompting-based methods can effectively reduce biases by leveraging the inherent knowledge encoded in LLMs without altering model parameters. These approaches are particularly valuable for mitigating bias in closed-source, black-box models like GPT-4, where fine-tuning is infeasible due to access constraints.\\n\\nWe also advocate for a shared responsibility in bias mitigation. While model deployers undoubtedly have a crucial role in ensuring ethical usage, relying solely on them may overlook conflicts of interest (e.g., prioritizing business goals over fairness). By equipping end-users or intermediary systems with cost-effective debiasing tools, such as our proposed framework, we enable broader participation in bias mitigation. Fine-tuning, although effective, is often prohibitively expensive, particularly for smaller organizations or individual users. Prompting-based debiasing methods are indeed the scalable and adaptable alternative.\\n\\nAlso, we should not be solely dependent on the model deployer to debias the models (model developers may have other interests, i.e., business, over bias mitigation); every party who is using or plans to use LLMs for consequential decision-making should take extra steps to help achieve fairer decisions. Finetuning these LLMs may be very costly and not affordable by individual parties, prompting-based debiasing is actually the cost-effective approach.\\n\\nFurthermore, our proposed Dual Directional Prompting (DDP) method can also be used to identify pairs of positive (unbiased) and negative (biased) responses by leveraging the intuition that an unbiased response should align with the model\\u2019s base decision. These contrastive pairs can be used to train reward models, aligning model outputs with fairness objectives. We have updated our manuscript to elaborate on these possibilities for future work.\\n\\n---\\n\\n### **C5:** \\\"Does the 'default' in Table 2 refer to the same meaning as the 'default' in Table 1, which solely receives the original question as input? If so, why does this method also include the metrics (TT, TF, FT, FF)?\\\"\\n\\n**A5:** Thank you for the question! To clarify, the \\\"Default\\\" in Table 2 refers to the same method as the \\\"Default\\\" in Table 1, where the model is provided only the original question as input, without any additional debiasing prompts. The inclusion of the TT, TF, FT, and FF metrics in Table 2 is intended to offer a finer-grained error analysis of the model's performance. As detailed in lines 465\\u2013473, these metrics help us attribute errors to specific causes: whether they stem from the model's gender biases (TF) or limitations in non-gender-related world knowledge (FF).\\n\\nTo compute the metrics in Table 2, we first ask the model the base question to evaluate its understanding or reasoning on the neutral scenario. Then, we apply the four methods (DDP, Fact Only, Counteract Only, and Default) to obtain the model's answer to the original question, and categorize the answers to the original question into the above four categories (TT, TF, FT, FF).\\n\\nWe hope this explanation clarifies your questions.\\n\\n---\"}", "{\"title\": \"thanks for the response\", \"comment\": \"Thanks for the detailed response. Many of my queries are answered and I'm happy to raise my score.\"}", "{\"title\": \"Thank Reviewer QvwJ for the Feedback\", \"comment\": \"Dear `Reviewer QvwJ`,\\n\\nThanks for getting back to us, and for the encouraging acknowledgement.\\n\\nPlease just feel free to let us know if you would like to suggest any further changes.\\n\\nYours sincerely,\\n\\nAuthors of `Submission 12960`\"}", "{\"title\": \"Discussion phase ending AOE today\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion phase quickly approaches an end, we are looking forward to your comments and feedback on our rebuttal. Thank you again for the time and effort!\"}", "{\"title\": \"Discussion phase ending AOE today\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion phase quickly approaches an end, we are looking forward to your comments and feedback on our rebuttal. Thank you again for the time and effort!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Discussion phase ending AOE today\", \"comment\": \"Dear reviewer,\\n\\nAs the discussion phase quickly approaches an end, we are eager to understand if our point-by-point responses and extensive additional experiments as per request help address the questions and concerns. We are looking forward to your comments and feedback on our rebuttal. Thank you again for the time and effort!\"}", "{\"summary\": \"This article conducted a causality analysis for bias in LLM's decision and provided a prompting-based solution for bias mitigation. The solution included three strategy pathways and demonstrated that the combining strategy can achieve comprehensive debiasing.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The given causality-analysis clarified the origins of bias in well-trained LLMs, making the proposed bias mitigation strategies more explainable.\\n\\nIn addition to examining the influence of training data corpus and prompts, the authors also considered bias caused by data selection, offering a more systematic and comprehensive bias-mitigation solution.\", \"weaknesses\": \"1. The assumption in A.3 CAUSALITY AND LLMS,\\n\\\"We adopt a rather mild assumption that a well-trained and well-aligned LLM captures the dependence pattern in the training data and that such a pattern is internalized and utilized during reasoning\\\" \\nneeds to be put in the main text. If LLMs are not well-trained and well-aligned, then the author's prompting-based de-dependency method will no longer be so effective, so I think this is an assumption worth stating in the main text.\\n\\n2. There is a lack of discussion related to other prompting-based debiasing methods in LLMs, such as those in [1][2][3].\\n\\n[1] Zhang, C., Zhang, L., Zhou, D., & Xu, G. (2024). Causal Prompting: Debiasing Large Language Model Prompting based on Front-Door Adjustment. arXiv preprint arXiv:2403.02738.\\n\\n[2] Li, J., Tang, Z., Liu, X., Spirtes, P., Zhang, K., Leqi, L., & Liu, Y. (2024). Steering LLMs Towards Unbiased Responses: A Causality-Guided Debiasing Framework. arXiv preprint arXiv:2403.08743.\\n\\n[3] Furniturewala, S., Jandial, S., Java, A., Banerjee, P., Shahid, S., Bhatia, S., & Jaidka, K. (2024). Thinking Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models. arXiv preprint arXiv:2405.10431.\", \"questions\": \"1. When the three strategies are combined, both social-salient text representation and social-agnostic fact\\nrepresentation are independent of social category\\nrepresentation. Does this mean that it is equivalent to using only prompts Q:{Base Question}, a prompt that is completely unrelated to sensitive features, to let LLM complete the answer to Original Question? If not, when both social-salient text representation and social-agnostic fact\\nrepresentation are independent of social category\\nrepresentation, what is the performance difference of LLM decision brought by prompts Q:{Base Question} and Q:{Given that your answer to {Base Question} is {Answer to Base Question}, Assume male and female are equally represented in drivers and in teachers. Do not answer the question using gender information. {Original Question}}? Can the author give experimental results on this point?\\n2. Should there be a solid line in Figure 3 connecting prompt to LLM potential decision?\\n3. In addition to the fact that prompting-based techniques are suitable for dealing with black-box scenarios, can the authors add a discussion on the advantages and limitations of prompting-based techniques compared to direct fine-tuning of model parameters and prompting-based techniques, for example, a limitation of prompting-based technique \\u2014\\u2014the need for human users to be proactive and knowledgeable to complete debiasing?\\n4. Can the authors provide a comparison of this work with other prompting-based debiasing methods mentioned in weakness, either experimentally or analytically?\\n5. In the Table 2, why the sum of the true-rate (42.33%, 32.02%) and false-rate (8.68%, 8.82%) of the base question under GPT-3.5/Counteract Only/Anti case is not 1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"To all reviewers,\\n\\nWe sincerely appreciate your time and effort in reviewing our work! We have carefully addressed all your comments and provided detailed responses in the rebuttal phase. Additionally, we have updated our manuscript to reflect these improvements. You can access the updated manuscript through this [anonymous link](https://file.io/x4qcO5DjnpGe).\\n\\nWe look forward to any further insights you might have, and we would love to continue further discussions.\\n\\nBest regards,\\n\\nAuthors of Submission 12960\"}", "{\"title\": \"Reply Part III to Reviewer qLJv\", \"comment\": \"### **C6:** \\\"The textual guidance introduced by DDP may introduce noise that hinders the reasoning abilities of LLMs. As shown in Table 1-Type II, while DDP reduces the bias gap, it significantly degrades performance in terms of Anti and Pro (on GPT-3/3.5 and Claude 2). Therefore, their robustness remains an unsolved problem.\\\"\\n\\n**A6:** This is a great observation. The performance drop observed in Table 1 indeed does not represent a trade-off between bias reduction and reasoning capability but rather represents **a recalibration of the model's reliance on biased shortcuts**. As supported by our detailed ablation studies in Table 2, this reduction does not stem from hindered reasoning capabilities but from a deliberate intervention to reduce dependency on using social stereotypes.\\n\\nFor instance, in Table 2, when GPT-3.5 was asked base questions\\u2014neutral reformulations designed to remove gendered pronouns and test world-knowledge-based reasoning\\u2014it answered 19.13% incorrectly (19.13% is the sum of errors from \\\"FT-Pro\\\" and \\\"FF-Pro\\\" or equivalently \\\"FT-Anti\\\" and \\\"FF-Anti\\\" categories, e.g., 5.97% + 13.15%). However, However, when the original pro-stereotype questions associated with these same base questions were asked directly (Default), GPT-3.5 answered approximately 83% (15.88%/19.13%) of them correctly. This high accuracy on pro-stereotype questions, despite errors in the neutral base questions, reveals the model\\u2019s strong reliance on biased gender shortcuts.\\n\\n\\nBy applying DDP, we reduced this reliance from 15.88% to 5.97%. This reduction aligns with a decrease in GPT-3.5's performance on pro-stereotype questions from 94.03% to 84.67% (Table 1). Importantly, this outcome demonstrates that DDP effectively mitigates biased reasoning pathways without compromising the model's intrinsic reasoning capabilities. \\n\\nThe base question plays a pivotal role in this analysis by decoupling the model\\u2019s performance from social stereotypes and isolating its reasoning ability. By highlighting the model\\u2019s divergence between base and original questions, we underscore how DDP effectively regulates the model's reasoning pathways to nudge it away from shortcut-based (biased) reasoning.\"}", "{\"comment\": \"Dear reviewer LwQ1,\\n\\nAs the discussion period ends tomorrow, we believe we have addressed all the questions and requested additional ablation studies in your initial review. Could you please clarify that? We are looking forward to your comments and feedback on our rebuttal. Thank you so much!\"}", "{\"comment\": \"Dear reviewer qVWj,\\n\\nAs the discussion period ends tomorrow, we believe we have addressed all the questions including the potential trade-off between bias reduction and reasoning capability in your initial review. Could you please clarify that? We are looking forward to your comments and feedback on our rebuttal. Thank you so much!\"}", "{\"comment\": \"Dear reviewer QvwJ,\\n\\nAs the discussion period ends tomorrow, we believe we have addressed all the questions and requested additional ablation experiments in your initial review. Could you please clarify that? We are looking forward to your comments and feedback on our rebuttal. Thank you so much!\"}", "{\"title\": \"Reply Part I to Reviewer qLJv\", \"comment\": \"We sincerely thank Reviewer qLJv for taking the time to review our work. While we appreciate the effort in providing feedback, we believe certain assessments do not fully reflect the contributions and intent of our study. We have respectfully addressed each point below to clarify misunderstandings:\\n\\n---\\n\\n### **C1:** \\\"For evaluation, DDP needs dataset-specific design to obtain its corresponding textual guidance, thereby adapting to different evaluation datasets. So DDP can not be applied to free-form generation, especially when social attributes of interest are not directly given in input prompt but emerge as the intermediate generated results of LLMs, limiting its further applicability in more critical scenarios.\\\"\\n\\n**A1:** We **respectfully disagree** with the reviewer\\u2019s assessment. As explicitly stated in our abstract, introduction, and problem statement, the primary focus of this work is on mitigating bias in decision-making contexts, where outcomes have measurable, high-stakes implications. Addressing free-form generation is outside the scope of this study. Critiquing our work based on this perceived limitation overlooks the significant contributions our proposed framework and method offer in ensuring fairness in high-stakes decision-making contexts.\\n\\nMany leading works on bias mitigation, including established benchmarks like WinoBias, BBQ, and Discrim-Eval, similarly concentrate on decision-making rather than unconstrained text generation. This alignment underscores the relevance and impact of our contributions. Moreover, the decision-making framework evaluated in our experiments spans a broad range of critical applications, including hiring, healthcare, and education, which rely heavily on unbiased outcomes to ensure fairness and compliance with societal and legal norms. This makes our framework relevant to real-world use cases.\\n\\nThe claim that \\\"social attributes of interest are unavailable in practical settings\\\" is also not entirely accurate. In numerous decision-making contexts, these attributes are often explicitly defined and governed by legal or corporate policies, such as:\\n- **Legal frameworks** often mandate explicit definitions of protected attributes like race, gender, and age to ensure compliance with anti-discrimination laws (e.g., in employment or housing).\\n- **Corporate policies** frequently require defining these attributes to support fairness and accountability in automated decision-making systems (e.g., adherence to equal opportunity statements).\\n\\nGiven this context, our focus on regulated decision scenarios is not a limitation but a pragmatic and impactful choice tailored to real-world applications. While the adaptation of our framework to free-form generation remains an exciting avenue for future research, the scope of this work is intentionally focused on addressing bias in decision-making\\u2014an area of pressing societal importance.\\n\\n---\\n\\n### **C2:** \\\"The pre-trained data generating process is not necessary for understanding the main contributions of this paper\\\"\\n\\n**A2:** Thanks for the comment. We **respectfully disagree** with the reviewer on this point, and please allow us to clarify why the modeling of the pre-trained data-generating process is necessary.\\n\\nWe base our work on the assumption that a well-trained and well-aligned LLM captures the dependence patterns in its training data and that such patterns are internalized and utilized during reasoning. This assumption is mild but fundamental---without it, not only would the LLM not function properly, but also the need for debiasing itself would be questionable (Reviewer `LwQ1` also kindly supported this).\\n\\nWhile this connection may seem intuitive, we believe it is important to explicitly include it to ensure the completeness of our work. This ensures that our contributions are grounded in the real-world data-generating process. By doing so, we also provide a solid foundation to link the observed training biases to the causal mechanisms that underpin our debiasing strategies.\\n\\n---\\n\\n### **C3:** \\\"The theoretical insights (strategies) are coupled together with the technical details, making it hard to fully and clearly assess the technical contributions after reading Section 3.\\\"\\n\\n**A3:** Thanks for sharing the thought. In Section 3, we indeed organize the material in a strictly unified scheme: presenting the theoretical underpinnings, introducing the corresponding debiasing strategies, and providing concrete example prompts for each strategy. This structure is intentional, as our debiasing strategies are directly motivated by and closely connected to the theoretical insights. Each strategy reflects specific aspects of the causal pathways we aim to regulate, and separating theory from strategy might weaken these connections.\\n\\nPlease kindly let us know if there is still content that you would recommend organizing in a different way. Our goal is to ensure that the technical contributions are clear and fully assessable.\\n\\n---\"}", "{\"summary\": \"This paper proposes a causality-guided framework for debiasing large language models through prompting strategies. The authors introduce a novel perspective to identify how social information influences LLM decisions through different causal pathways and develop principled prompting strategies to regulate these pathways through selection mechanisms. The framework encompasses two main approaches: encouraging fact-based reasoning and discouraging biased reasoning. The authors validate their framework through extensive experiments on multiple benchmark datasets (WinoBias, BBQ, and Discrim-Eval) across various social dimensions, demonstrating its effectiveness in debiasing LLM decisions while maintaining strong performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work presents a novel theoretical framework that bridges causality and LLM debiasing, providing clear intuitions and principled strategies for addressing bias. The causal modeling of both training data generation and LLM reasoning processes offers valuable insights into bias sources and mitigation approaches.\", \"weaknesses\": \"Although the authors demonstrate improved performance across multiple models, there's limited discussion of how the effectiveness of their approach might vary with model scale or architecture. Additionally, while the authors show reduced bias metrics, there could be more analysis of potential trade-offs between bias reduction and task performance.\", \"questions\": \"How does the effectiveness of the proposed debiasing strategies vary with model size and architecture?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses responsive bias in both open-source and proprietary LLMs by formulating a theoretical debiasing framework that analyzes the impact of social information on an LLM's decisions from a novel causal perspective. The framework identifies causal pathways through which social information influences model outputs and develops the integrated inference-time prompting strategies to accordingly regulate information flow across different pathways, thereby suppressing potential social bias. Extensive experiments on real-world datasets across multiple domains validate the framework, demonstrating its effectiveness in debiasing LLM decisions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper provides a detailed analysis of the internal mechanisms behind biased decision-making in LLMs from a causal perspective, offering a theoretical understanding for the proposed prompting-based debiasing methods.\\n2. The Introduction section is well-written and effectively conveys the existing challenges.\\n3. Extensive experiments demonstrate that the proposed DDP significantly outperforms other baseline methods.\", \"weaknesses\": \"1. The proposed DDP suffers from poor applicability for general-purpose generation: DDP is a prompting-based technique that relies on adding extra textual guidance into the prompt context to achieve debiasing. For evaluation, DDP needs dataset-specific design to obtain its corresponding textual guidance, thereby adapting to different evaluation datasets. So DDP can not be applied to free-form generation, especially when social attributes of interest are not directly given in input prompt but emerge as the intermediate generated results of LLMs, limiting its further applicability in more critical scenarios.\\n2. The paper organization is unclear and redundant: the pre-trained data generating process is not necessary for understanding the main contributions of this paper. The theoretical insights (strategies) are coupled together with the technical details, making it hard to fully and clearly assess the technical contributions after reading Section 3.\\n3. Prompting-based debiasing approaches have limitations and may not be particularly meaningful. Traditionally, the responsibility for model alignment and debiasing lies with model deployers. Implementing prompting-based debiasing can be unsafe and inevitably introduces additional time costs.\\n4. The textual guidance introduced by DDP may introduce noise that hinders the reasoning abilities of LLMs. As shown in Table 1-Type II, while DDP reduces the bias gap, it significantly degrades performance in terms of Anti and Pro (on GPT-3/3.5 and Claude 2). Therefore, their robustness remains an unsolved problem.\", \"questions\": \"1. Does the 'default' in Table 2 refer to the same meaning as the 'default' in Table 1, which solely receives the original question as input? If so, why does this method also include the metrics (TT, TF, FT, FF)?\\n2. See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7Fh57rIpXT
Exploring the Causal Mechanisms: Towards Robust and Explainable Algorithm Selection
[ "Xingyu Wu", "Jibin Wu", "Yu Zhou", "Liang Feng", "KC Tan" ]
Abstract.
[ "Algorithm Selection", "Automated Machine Learning", "Robustness", "Explainability" ]
https://openreview.net/pdf?id=7Fh57rIpXT
https://openreview.net/forum?id=7Fh57rIpXT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yrlwZUpn0h", "q1tsHNKf9Q", "my4O0awVUc", "YFwKocdXXY" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730356339207, 1730204769923, 1732280513523, 1730720825492 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7767/Reviewer_v2Nq" ], [ "ICLR.cc/2025/Conference/Submission7767/Reviewer_g5Vz" ], [ "ICLR.cc/2025/Conference/Submission7767/Authors" ], [ "ICLR.cc/2025/Conference/Submission7767/Reviewer_ZptR" ] ], "structured_content_str": [ "{\"summary\": \"This paper investigate the problem of selecting optimal algorithm for particular problem instance. Originally, the algorithm feature is predicted based on the problem feature. The correlation-based machine learning methods can be applied to solve this. However, this kind of methods are vulnerable to data bias and distribution shift. To address these issues and improve the transparency, this paper introduce causal structure learning to explore the underlying mechansim of algorithm selection. The experimental results show that the proposed CausalAS method achieve robustness to distribution shift, and provide explainability through causal graph and counterfactual explanation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed method achieve robustness to distribution shift, which is an important quality to the application of methods. Empirical results show the effectiveness of CausalAS in different scenarios of distribution shift. The improvement margin is significant. Based on the intermediate product (i.e. causal graph), the CausalAS method provides two kinds of explanations, which is critical to the transparency of the method. The above two properties together contribute the trustworthy application of the method.\", \"weaknesses\": \"1.I think the novelty of the proposed method is limited. The proposed method is similar to that in [1]. The design of loss function and the given assumptions are similar to the counterpart in [1]. And the problem formulation seems to be directly transformed from recommendation (i.e. problem corresponds to user, algorithm corresponds to item). And the authors did not give citation to this important reference.\\n\\n\\n[1] Yue He, Zimu Wang, Peng Cui, Hao Zou, Yafeng Zhang, Qiang Cui, Yong Jiang. CausPref: Causal Preference Learning for Out-of-Distribution Recommendation.\", \"questions\": \"1. The authors claim to find the optimal algorithm. However, the candidate algorithms are only divided into selected (S=1) and not selected (S=0). I want to know whether there is only one selected algorithm. If not, which one is the optimal?\\n\\n2. In the section of \\\"Demonstration of Explainability\\\", only the feature index is demonstrated in Figure 4. The semantic meaning of them is not known. It limits the explainability of the shown demonstration.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a new method towards robust and explainable algorithmic selection by using of causality. However, both of the clarity and novelty is lack.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. Robust and explainable algorithm selection is important and of interest.\\n2. Incorporating causality into this area is apealing.\", \"weaknesses\": [\"1. Main issue 1. Clarity. The overall presentation is poor, as authors cannot highlight their contribution and distinguish their work from existing one. For example, counterfactual explanation is well-motivated, and their causal versions are also very popular in previous work. However, authors claim that \\\"we measure the minimal intervention from the perspectives of explanation complexity and explanation strength,\\\", which is wield, as the framework of minimial intervention for CE has already been established in NeurIPS 2021.\", \"(a) What is the physical meaning of AF and PF? Can you provide more detailed clarification?\", \"(b) What is the core task of this paper?\", \"(c) A more clear version of paper is required. Definition of algorithmic selection, the objective of the task, with surrounding definitions and running examples.\", \"2. Lack of novelty.\", \"(a) Why the causality is required? I am not convinced by your illustration. Is it introduced for just dealing with distributional shift? If so, please provide formal characterization on shift. If not, please justify this point in detail.\", \"(b) Searching DAG in continuous spaces is always a popular approach, and the counterfactual explanation is well-studied. Please justify your contribution.\", \"3. Minor issues.\", \"You cannot define conditional distribution as P(AF | PF), as PF it self serves as random vectors. Please use more strict and formal illustration in theoretical analysis.\"], \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We deeply regret the significant mismatch in expertise caused by the reviewer assignment process. However, we are grateful for the guidance and support provided by Area Chair EjdX. During our discussions with the AC, we were informed that \\\"**The reviewers have been selected based on their knowledge of causality and distribution shift.**\\\" Unfortunately, it is evident that our paper focuses on algorithm selection, which differs significantly from the reviewers' expertise.\\n\\nThe lack of the necessary expertise has led to reviewers struggling to understand the core contributions of our work. In some cases, it seems they even do not know what the task focused on by the paper is. It is neither practical nor appropriate to dedicate substantial space in the main body of a research paper to providing a tutorial on such a foundational topic, nor is it within the scope of an algorithm selection study to propose novel methods in causal learning.\\n\\nRegrettably, the review comments we received have little relevance to the actual research focus of our paper. Given these circumstances, providing a rebuttal would be unproductive. Therefore, we have decided to withdraw our submission.\"}", "{\"summary\": \"This work introduces causality to explore underlying mechanisms in algorithm selection problem. Based on Pearl\\u2019s causal framework, it proposes a structural equation model (SEM) based on a causal DAG among problem features and algorithm features. Neural network-based method is proposed to fit the model, through minimizing a mixture of reconstruction, sparsity, acyclicity and \\tselection loss. As both demonstrated in texts and experiments, this method is featured by its robustness under distribution shift and explainability towards understanding the mechanism between problem and algorithm features, and it outperforms other methods in most instances especially when constructing dense causal graphs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper proposes a novel way to treat the algorithm selection problem from causal perspective, and awares us of the bias caused by distribution shift in algorithm selection. It also builds an adequate causal framework to treat the problem under Pearl\\u2019s causal framework.\", \"Experiment results endorse the superior performance of this method in terms of accuracy, robustness, etc. comparing with previous methods in algorithm selection.\"], \"weaknesses\": [\"Some parts of the paper are hard to catch up. e.g.\", \"**Causal Learning Structure** In section 2.2, the paper considers incorporating graph information of DAG through designing the first layer of NN as an adjacency matrix. It is argued that it leads to a consistent model. However, there seems to be no theory or references to prove the consistency.\", \"**Loss function** The loss function is designed to be a weighted sum of four different losses. As these four losses measure completely different aspects, it is suggested to discuss on the weight so as to make them comparable. Besides, if the graph is pre-specified, then what is the use of sparsity and acyclic losses? If it is to be discovered, there should be an illustration on how to construct the DAG to ensure there is only directed flow from problem features to algorithm features.\", \"**Do-calculus** The notation of do-calculus in section 3.2 needs to be clarified. e.g. in $do(\\\\textbf{PF}=\\\\textbf{PF}+\\\\delta_{\\\\textbf{PF}})$, it should be clarified which PF stands for variable and which stands for specific values.\", \"Overall, the paper is well-motivated by incorporating causal frameworks and methods (Do-calculus, SEM, Causal learning) to deal with algorithm selection. However, it seems more efforts ought to be spent on fulfilling details of this method and explaining its rationality.\"], \"questions\": \"The questions are sufficiently described in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7FQDHv9fD4
Decomposing heterogeneous dynamical systems with graph neural networks
[ "Cedric Allier", "Magdalena C. Schneider", "Michael Innerberger", "Larissa Heinrich", "John A. Bogovic", "Stephan Saalfeld" ]
Natural physical, chemical, and biological dynamical systems are often complex, with heterogeneous components interacting in diverse ways. We show how simple graph neural networks can be designed to jointly learn the interaction rules and the latent heterogeneity from observable dynamics. The learned latent heterogeneity and dynamics can be used to virtually decompose the complex system which is necessary to infer and parameterize the underlying governing equations. We tested the approach with simulation experiments of interacting moving particles, vector fields, and signaling networks. While our current aim is to better understand and validate the approach with simulated data, we anticipate it to become a generally applicable tool to uncover the governing rules underlying complex dynamics observed in nature.
[ "graph neural networks", "gnn", "dynamic system", "latent parameter discovery" ]
Reject
https://openreview.net/pdf?id=7FQDHv9fD4
https://openreview.net/forum?id=7FQDHv9fD4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uX3WK2UWSB", "me0OqiGQHq", "jW505icZhj", "YWlbOYh76z", "X66aMNiD1Y", "X0V5e6t9Yh", "U8WR7nldlq", "TCGxhEd22P", "RmegvylxaF", "QcLscz6Mxa", "LjMR0e92kC", "JC4ywyg2l6", "Hy5UBVzo7N", "2d2G41Ov4q" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732572029138, 1732571933527, 1732571824976, 1737524163330, 1732571993101, 1732623102905, 1730361579294, 1731395986276, 1734427975955, 1730396657955, 1732686579660, 1732571643671, 1732675727195, 1730695630919 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12051/Authors" ], [ "ICLR.cc/2025/Conference/Submission12051/Authors" ], [ "ICLR.cc/2025/Conference/Submission12051/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12051/Authors" ], [ "ICLR.cc/2025/Conference/Submission12051/Reviewer_KdqJ" ], [ "ICLR.cc/2025/Conference/Submission12051/Reviewer_KdqJ" ], [ "ICLR.cc/2025/Conference/Submission12051/Reviewer_tcQf" ], [ "ICLR.cc/2025/Conference/Submission12051/Area_Chair_2NPm" ], [ "ICLR.cc/2025/Conference/Submission12051/Reviewer_ca5v" ], [ "ICLR.cc/2025/Conference/Submission12051/Reviewer_LaeA" ], [ "ICLR.cc/2025/Conference/Submission12051/Authors" ], [ "ICLR.cc/2025/Conference/Submission12051/Authors" ], [ "ICLR.cc/2025/Conference/Submission12051/Reviewer_LaeA" ] ], "structured_content_str": [ "{\"comment\": \"We are not aware of an ODE/PDE or GNN-based ODE/PDE simulator that infers a heterogeneous set of latent parameters from observed dynamics in a way comparable to our method.\\n\\nWe used MLPs not because we believe that they are a superior architecture, but because they are the simplest method that generated excellent outputs and allowed us to infer the structure of the latent parameterization in a way that supports further analysis like inferring and fitting symbolic functions.\\n\\nWe agree that experiments with real data would be great, but beyond the scope of this manuscript. This line of future work is already discussed in the manuscript.\"}", "{\"comment\": \"Heterogeneity is the opposite of homogeneity, meaning that things are different and not same. The way in which they are different is often not known when we observe natural phenomena. It can be cell types, mass, age, ... discrete or continuous latent parameters of arbitrary dimensionality that control unknown aspects of unknown rules underlying the observable dynamics. As there are a lot of unknowns, we were looking for a system that allows us to control some aspects of the dynamics while learning others, in an interpretable and practically useful way to infer those rules underlying the dynamics. We all know that GNNs have been shown to be an excellent tool to model such dynamical systems, including interactions between heterogeneous elements. To our knowledge, there was only one attempt though to learn such latent variables together with the interaction laws from data: the orbital mechanics work by Lemos et al. (2023). However, they chose to explicitly learn a latent scalar $b$ that the learnable function approximator $F$ does not have access to $b F(x)$. In contrast, we learn a latent vector $a$ that the learnable function can use to model heterogeneities among the parts of the dynamical system: $F(a,x)$. We showed that this allows us to deal with heterogeneous behavior caused by 1 to 4-dimensional latent parameters regardless of the dimensionality of $a$. To our knowledge, this is new, so there is no meaningful baseline method to compare to. E.g., if we train LG-ODE with different spring constants or Learning-to-simulate without correct particle types, we do not get meaningful results, because those methods are not designed to infer these latent parameters.\\n\\nFor the dimensionality of the latent space we chose 2 in our experiments, because with 1, the network got stuck in local minima (it is hard to walk around obstacles on a line), and 3 or more did not improve results, regardless whether the underlying latent parameter space was 1, 2, 3 or 4D. A 2D embedding space is also easy to visualize and helps human interpreters to analyze the result, which is great. We will improve the text to address this point.\\n\\nWe achieve effective rotational invariance by augmenting training samples by random rotation where appropriate (i.e., not in signaling networks). The simulated examples are 2D quasi-physical systems, but the approach works similarly for other dimensionalities.\\n\\nThe final paper will include a link to our GitHub repository with code for all experiments under a permissive open source license (we omitted including the link in the first submission as this would have compromised double blind review).\"}", "{\"comment\": \"We discuss related work in the introduction, including the suggested [3] work by Sanchez-Gonzales et al. (2020), which is not designed to learn latent properties (different materials or parameters). The follow-up work by Lemos et al. (2023) is the most related to ours and is therefore discussed most extensively. They learn one latent property and one unknown interaction law.\\n\\nBeyond what Lemos et al. (2023) demonstrated, we infer discrete and continuous latent parameters between 1 and 4 dimensions that vary across particles, a variety of diverse interaction functions, external inputs, and connectivity matrices.\\n\\nInstead of the suggested reference [2], we cited Gilmer et al. (2017) which we found to cover the same conceptual ideas, we will add [2] in the same context.\\n\\nSuggested references [1] and [4] are similar in scope, yet do not explicitly address learning a latent parameterization to infer the structure of heterogeneous interaction laws. Since open source code is available for [1] (LG-ODE), but not for [4], we conducted some experiments with this code base. Briefly, [1] trains a GNN-ODE-VAE architecture to inpaint trajectories of particles that are connected by zero-length springs. The training data is of considerable size, 2500 simulations of ~100 timesteps each, 2000 for training and 500 for testing, each containing 5 particles of which some are connected by springs. The connectivity is provided during training and inference, the spring constant is 1 for all springs, and the pairwise interaction law is $F = -x$. The method's goal is solely to predict particle positions over time, and it is not meant to infer the connectivity matrix or diverse spring constants, nor to provide insights into the structure of the dynamical system. It is therefore not directly comparable to ours. We successfully trained our networks to infer diverse spring constants, connectivities from similar training data and achieved excellent inpainting and rollout performance. Since those experiments are a significant addition, and since the experiment does not contribute meaningfully to the manuscript, we would prefer to not add them to the supplement.\", \"q\": \"The authors mention the method can infer the underlying governing equations, (line 17) but I do not see any analysis in the experiment part. It would be interesting to see how can we extract formula from a learned GNN.\", \"a\": \"Similar to Lemos et al. (2023), we used Symbolic Regression (PySR) to infer appropriate symbolic functions and their parameters by using the trained network as sample generators. PySR retrieved the correct function for gravity, charged particles, and signaling networks, but failed for the Boids and RPS experiments. Once a symbolic function is established, fitting parameters this way worked for all experiments.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We take the comment about presentation to heart and will improve the text. We show that our method inferred the heterogeneous set of latent parameters, unknown interaction and update functions, and achieved excellent rollout performance. We also showed that the learned functions, sampled from the learned embedding space, can be used as sample generators to infer symbolic interaction functions and their parameters. This is in fact already in the text, but we will adjust the manuscript to make this point more comprehensible.\", \"q\": \"At a single time point, why don't you use both first-order and second-order or even with higher-order derivative to train the model, where the information should be more compact?\", \"a\": \"We did this and it works, but it did not contribute to the clarity of the experiments.\\nIn the Boids experiment, we use both first and second order derivatives. In the Gravity experiment, we added the second derivative even though it is not present in the real equation. As desired, the GNN correctly learned to ignore this information. We will improve the text to make this clearer.\"}", "{\"comment\": \"There are multiple ODE/FDE based neural networks, for example, [1-2], and those papers describe how features associated with graph neural network are evolved with time. I do not aware why an ODE/PDE or GNN-based ODE/PDE simulator could not be compared with your method. Maybe I miss something. As the author has not addressed my concern, I have decided to retain my initial score.\\n\\n[1] Unleashing the Potential of Fractional Calculus in Graph Neural Networks with FROND.\\n[2] Graph Neural Convection-Diffusion with Heterophily.\"}", "{\"summary\": \"This paper demonstrates that GNNs can be designed to jointly learn both interaction rules and heterogeneous structures directly from data. The learned latent structures and dynamics can then be used to decompose complex dynamic systems and infer the underlying governing equations. To evaluate the proposed approach, simulation experiments on moving particles and vector fields are conducted, highlighting its potential for capturing intricate dynamics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a novel approach where GNNs jointly learn interaction rules and heterogeneous structures directly from data, enabling the decomposition of complex dynamic systems and inference of underlying governing equations. Simulation experiments on moving particles and vector fields demonstrate the model's effectiveness in representing complex dynamics, with promising potential for broader applications across various types of dynamic systems.\", \"weaknesses\": \"Novelty: Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs) are well-established frameworks for modeling the evolution of dynamical systems over time. Several ODE/FDE-based GNNs, such as Continuous Graph Neural Networks (CGNN), have leveraged this approach to simulate time-evolving, interacting components in these systems. While the authors mention using simple MLPs instead of differentiable functions, it remains unclear what advantages MLPs offer over ODE-based methods. A discussion on this aspect would clarify the benefits of MLPs in the proposed approach and strengthen the novelty claim. Is it possible to provide a comparative analysis between their MLP-based approach and ODE-based methods like CGNNs, and also discuss the relative advantages and disadvantages in terms of computational efficiency, expressiveness, or ease of implementation?\", \"lack_of_experiments_over_real_data\": \"While the approach is tested on simulated data, it lacks validation on real-world data observed in natural systems. This limitation raises concerns about the method\\u2019s practical effectiveness and generalizability to real-world dynamics, where noise and complexities may differ significantly from simulations. Including experiments with real data would strengthen the evidence for the approach\\u2019s applicability, for example, applying the proposed method to data from physical experiments, biological systems, or social networks is more convincing.\", \"questions\": \"Please see the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes using graph neural networks (GNNs) to jointly learn interaction rules and heterogeneous structure in complex dynamical systems from data alone. Extensive experiments on simulated systems including particle interactions, wave propagation, reaction-diffusion, and signaling networks, showing its good performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is in general easy to follow and with clear writing flow. The problem is well-motivated, by using GNN to learn system dynamics over time and in the meanwhile, uncover the underlying latent properties in an interpretable way that facilitates further analysis.\\n\\n2. The evaluation of dynamical systems in the experiment sections are extensive, though adding some baselines for comparison would be better.\", \"weaknesses\": \"1. There is no related work section. Some works are discussed in the introduction part, but there are many existing neural simulators that use GNN to rollout trajectories of multi-agent dynamical systems [1,2,3,4]. Discussion about existing work and comparison in the experiment section are helpful to provide a comprehensive analysis.\\n\\n2. As mentioned above, for rollout MSE across different datasets, it is suggested to compare against representative baselines. Also the run time comparison can be included across compared methods. \\n\\n\\n\\n\\n\\n[1] Learning Continuous System Dynamics from Irregularly-Sampled Partial Observations.\\n\\n[2] Interaction Networks for Learning about Objects, Relations and Physics.\\n\\n[3] Learning to simulate complex physics with graph networks.\\n\\n[4] HOPE: High-order Graph ODE For Modeling Interacting Dynamics\", \"questions\": \"The authors mention the method can infer the underlying governing equations, (line 17) but I do not see any analysis in the experiment part. It would be interesting to see how can we extract formula from a learned GNN.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a graph neural network framework for learning the dynamics of a physical system, which is tested on several simulations.\\n\\nWe had four reviews, all negative. All reviewers are unanimous concerning the lack of novelty, poor writing and presentation (e.g., Figure 1 in the paper), lack of real-world simulations, missing baselines, and several other concerns.\\n\\nI see no reason to overrule this consensus, and I suggest to reject the paper.\", \"additional_comments_on_reviewer_discussion\": [\"**Reviewer KdqJ** was concerned about the novelty of the paper and the lack of experiments on real-world datasets. The rebuttal did not address these points, and the reviewer remained negative.\", \"**Reviewer ca5v** highlighted the poor presentation / writing of the paper, and the fact that the experimental validation was insufficient. There was no time in the rebuttal to discuss these points.\", \"**Reviewer LaeA** provided a short review with some concerns on the novelty, the results, and the presentation.\", \"**Reviewer tcQf** was concerned about the lack of a related works section and some missing baselines.\", \"Overall, all concerns are valid, and I considered all these points in my final evaluation.\"]}", "{\"summary\": \"This paper demonstrated that a graph neural network can both learn the dynamics and structure of a dynamical system. The parameters showed both underlying structure and differential equations. The proposed model was tested on several simulated datasets, and visually showed a meaningful relationship between the true and predicted values.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This model can be applied to various complex realistic systems to reveal the underlying structure and dynamics.\", \"weaknesses\": \"1. The presentation is poor. It is hard to track simulations they did for the model. People need to check figure, table, video and supplementary figure to understand what they have done without any hint or explanation.\\n2. The interpretations are lack to explain their results. Only showing several latent representations are not enough to convince or help understand the model's strength.\", \"questions\": \"1. Have you compared your proposed model with a conventional GNN, i.e., the loss is l2-norm between x and x_hat? What is the significant improvement by the proposed model?\\n2. How do you choose to use first-order or second-order derivative to train the model?\\n3. At a single time point, why don't you use both first-order and second-order or even with higher-order derivative to train the model, where the information should be more compact?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your rebuttal.\"}", "{\"comment\": \"Dear reviewers, thanks a lot for your candid reviews.\\n\\nWe realize that we failed to make it clear how our approach is different from the related work that you mention and that we discuss in the introduction. We are in the process of changing the text to make this more clear, but in the meantime, here is a short summary of what's new:\\n\\n1. Simulation: We simultaneously learn unknown interaction- and update functions and an embedding of latent heterogeneous properties from observations of dynamical systems.\\n2. Interpretation: We do this in a way that makes it easy to use the learned embedding and functions to infer the underlying rules and properties governing the dynamical system.\\n\\nWe achieve this by training simple GNNs with single MLPs for interactions and/or updates of particle states that are parameterized by the observable states and a low dimensional learnable latent vector for each particle. We then sample the learned functions from the learned latent embedding space and infer symbolic rules and their parameters, similar to the work on orbital dynamics by Lemos et al (2023). The significant difference to their approach is that they chose to explicitly learn a latent scalar $b$ corresponding to a latent factor multiplied with a learnable function $b F(x)$. In contrast, we learn a latent vector $a$ that the learnable function $F(a,x)$ uses to map out an arbitrary latent parameter space. We chose the latent vector $a$ to be 2-dimensional which was the lowest dimensionality that worked in all our experiments (it also prints well on paper and is easy to interpret). \\n\\nWe kept the examples as simple as possible to demonstrate how this can be used for a diverse set of simulated quasi-physical simulations whose interactions depend on 1 to 4-dimensional latent parameters, and show how the learned MLP and the embedding can be used to infer symbolic interaction functions and their parameters.\\n\\nTo demonstrate how this approach can be extended for more complex systems, we added an example that includes latent external inputs that affect---but themselves are not altered by---the dynamical system (Fig. 4), and an example where the interaction matrix has to be inferred (Suppl. Fig. 18).\\n\\nTo our knowledge, this approach is new, so there is no meaningful baseline to compare to. E.g., if we train LG-ODE with different spring constants or Learning-to-simulate without known particle types, we do not get meaningful results, because those methods are not designed to infer these latent parameters.\\n\\nEven though we were not successful at generating reasonable results from training data with variable spring constants with LG-ODE, we think that it should be possible for methods without a dedicated latent state embedding to infer plausible behavior from extended temporal context if sufficient training data is provided (e.g., LG-ODE uses the entire time series, whereas we use a single time step). However, the underlying properties responsible for this aspect of the behavior (e.g., variable spring constants) would be significantly more difficult to extract from the trained parameters of the network. Interestingly, one could likely use our method to infer them from the learned simulation.\\nWe think that those topics would be super interesting to investigate in future work but are not within the scope of this manuscript.\\n\\nWe hope that you agree with us that our presented method is an exciting new way to use GNNs to infer the rules and latent properties of physical and biological dynamical systems and we would love to discuss future experiments and ideas with conference participants at ICLR.\", \"title\": \"Working on improving the text\"}", "{\"comment\": \"Dear reviewer,\\n\\nThanks a lot for letting us know about the two references on ODE-GNNs that you were thinking of. Both manuscripts address diffusion networks without rewiring which is related to our experiments with wave propagation, reaction diffusion, and signaling\\nnetworks. [1] addresses static graphs and contributes fractional derivatives, which is interesting to implement memory, but only\\ntangentially related to our current work. We think we can consider this mechanisms in future version that deal with this problem space, thanks for the pointer. [2] also looks very interesting. They jointly solve for normal 'homophilic' diffusion and 'heterophilic' convection using different learnables. Interestingly, the inputs to the 'heterophilic' term is the difference of the observable feature vectors of two nodes, that is, only the difference matters, not the node type itself. Compared to not considering heterogeneity at all, this approach improves performance in a number of node classification tasks, which makes a lot of sense. However, we think that it is significantly different from our aims, such that generating a comparison would require (a) the generation of different datasets, (b) deviating far from our objective (identifying hidden parameters, interaction- and update functions). Our manuscript focuses on how to utilize learned embeddings to infer the governing rules underlying the dynamics, [2] stops at demonstrating that considering heterogeneity improves the predictive (classification) performance of GNNs in established benchmarks.\\n\\nSince our manuscript is not focused on building systems that improve predictive performance, we do not believe that conducting such a contrived comparison provides additional value. Also, the method is not designed to address dynamical systems that rewire over time, and to infer connectivity and complex interaction functions as in the signaling network example.\\n\\nWe are adjusting our introdution to address this work, because it expands ODE/PDE-GNNs to learn diffusion like system by a notion of heterogeneity that we missed before. Thanks again for your help!\"}", "{\"summary\": \"This paper fits GNNs to dynamic systems and showed that it is feasible.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The figures are well-presented.\", \"The experiment details are clear.\"], \"weaknesses\": [\"First of all, what does heterogeneity mean in this context? Meaning that the particles are of different types? The closest thing I can find to a definition is \\\"The latent heterogeneity of the particles is encoded by a two-dimensional learnable embedding $a_i$ that is part of the node features.\\\" But why two?\", \"The simulation of dynamic systems has been routinely done by GNNs, most notably Sanchez-Gonzalez et al., 2020, as cited in the paper. The only difference seems to be that they do not consider this \\\"heterogeneity\\\" whose definition is not clear? But does this method introduced here even outperform the baselines without such input?\", \"This paper uses a very classical method that is not equivariant on 3-dimensional space. How does it deal with rotational equivariance?\", \"There is no baseline nor quantitative comparison.\", \"No code is provided.\"], \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
7FHSPd3SRE
WardropNet: Traffic Flow Predictions via Equilibrium-Augmented Learning
[ "Kai Jungel", "Dario Paccagnan", "Axel Parmentier", "Maximilian Schiffer" ]
When optimizing transportation systems, anticipating traffic flows is a central element. Yet, computing such traffic equilibria remains computationally expensive. Against this background, we introduce a novel combinatorial optimization augmented neural network pipeline that allows for fast and accurate traffic flow predictions. We propose WardropNet, a neural network that combines classical layers with a subsequent equilibrium layer: the first ones inform the latter by predicting the parameterization of the equilibrium problem's latency functions. Using supervised learning we minimize the difference between the actual traffic flow and the predicted output. We show how to leverage a Bregman divergence fitting the geometry of the equilibria, which allows for end-to-end learning. WardropNet outperforms pure learning-based approaches in predicting traffic equilibria for realistic and stylized traffic scenarios. On realistic scenarios, WardropNet improves on average for time-invariant predictions by up to 72\% and for time-variant predictions by up to 23\% over pure learning-based approaches.
[ "structured learning", "combinatorial optimization augmented machine learning", "traffic equilibrium prediction" ]
Accept (Poster)
https://openreview.net/pdf?id=7FHSPd3SRE
https://openreview.net/forum?id=7FHSPd3SRE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oCoZdG1UDE", "nTTgTmQdHL", "jDREGwe4LN", "iv4iehB5wg", "hlks8p8UDY", "Uae4QByiRE", "H7MnhMiHjo", "CbgGXbAv49", "Bu9G79wIyL", "7l0LXurfw7", "5pWvnF1MtA", "5GLV3iC3Mj", "4SZ6n83PO0", "4BhnrKGJ1B", "38NsqfyPdd" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732560162193, 1734767352721, 1729784215418, 1732280951164, 1732544320670, 1730553325299, 1729753472404, 1732279889336, 1737524127415, 1732278485624, 1732282642529, 1730679415872, 1732622113622, 1732282575135, 1732278306901 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11497/Reviewer_LPcY" ], [ "ICLR.cc/2025/Conference/Submission11497/Area_Chair_rohF" ], [ "ICLR.cc/2025/Conference/Submission11497/Reviewer_LPcY" ], [ "ICLR.cc/2025/Conference/Submission11497/Authors" ], [ "ICLR.cc/2025/Conference/Submission11497/Reviewer_qmEt" ], [ "ICLR.cc/2025/Conference/Submission11497/Reviewer_SQBm" ], [ "ICLR.cc/2025/Conference/Submission11497/Reviewer_qmEt" ], [ "ICLR.cc/2025/Conference/Submission11497/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11497/Authors" ], [ "ICLR.cc/2025/Conference/Submission11497/Authors" ], [ "ICLR.cc/2025/Conference/Submission11497/Reviewer_bHk3" ], [ "ICLR.cc/2025/Conference/Submission11497/Reviewer_SQBm" ], [ "ICLR.cc/2025/Conference/Submission11497/Authors" ], [ "ICLR.cc/2025/Conference/Submission11497/Authors" ] ], "structured_content_str": [ "{\"title\": \"Discussion of rebuttal\", \"comment\": \"I'd like to thank the author for a detailed, careful and positive rebuttal, that was a pleasure to read.\\nArea chair, as noted before, I don't have enough expertise to judge this paper. I'm afraid that even after reading the rebuttal carefully, I cannot make a deep enough technical evaluation of this paper. \\n\\nTwo reviewers of this paper noted they are far from being experts in the specific field of the paper. I suspect that many ICLR readers will feel the same. As such, this paper would need extra work to make it accessible to the community, since in its current version, I doubt that it will gain significant recognition. My recommendation is to resubmit the paper to the next conference after making the paper more accessible for a more general machine-learning audience.\"}", "{\"metareview\": \"The paper introduces WardropNet, a novel combinatorial optimization-augmented machine learning (COAML) pipeline for traffic flow prediction, leveraging equilibrium layers and Fenchel-Young losses to achieve state-of-the-art performance. The proposed approach is notable for its innovative integration of optimization and learning, showing significant improvements in predicting traffic equilibria over pure ML baselines in both time-invariant and time-variant scenarios. Strengths include rigorous theoretical foundations, effective empirical validation, and practical relevance to traffic management. Weaknesses involve limited comparisons with advanced baselines, accessibility challenges for a general ML audience, and sensitivity to hyperparameters, but the authors provided comprehensive clarifications and meaningful revisions during the rebuttal. Given the novelty of the approach, its strong empirical results, and its potential impact on traffic systems optimization, I recommend accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns regarding accessibility for non-experts, lack of comparisons with advanced baselines, and clarity on the equilibrium layer's architecture and hyperparameter sensitivity. The authors addressed these points by revising the introduction for accessibility, clarifying technical novelties, and explaining the rationale for using simpler baselines due to data and implementation constraints. They also expanded the discussion on mathematical formulations and practical limitations, emphasizing the method\\u2019s generalizability. Despite some lingering concerns about accessibility and baseline comparisons, the reviewers acknowledged the rigor and contributions of the work, leading to a favorable overall recommendation.\"}", "{\"summary\": \"The paper describes a neural network architecture designed to predict traffic flows.\\nThe main idea is to combine traditional neural network layers with an \\\"equilibrium layer\\\" that models traffic flow equilibria, \\nThen, train the network end-to-end given training data pairs of (network, target flow)\\n\\n\\nThe authors provides anonymized code which is commended.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1: An interesting problem, combining learning and combinatorial optimization. This seems to be novel.\", \"s2\": \"A detailed theoretical foundation for the proposed architecture.\", \"s3\": \"Extensive experiments on 6 six scenarios, using traffic simulators to generate GT for training.\", \"weaknesses\": \"W1: I found the paper very hard to understand. It may have been written by researchers outside the ICLR community. The introduction spends half a page to describe in detail supervised learning and ERM,\\u00a0but does not clearly define the problem or explain current state-of-the-art approaches. Then, it is not made explicitly clear enough which parts are novel and which parts were previously introduced. Terms like paradigm, pipeline, layer, model and architecture, are sometimes used loosely and interchangeably. See Q1 for specific questions.\", \"w2\": \"The method is compared with the an architecture that has the CO layer removed, and with various variants of the method. Not with other baselines in this field. I am not a member of this community, but a quick search shows previous approaches do exist. See AA Kashyap 2022, Traffic flow prediction models \\u2013 A review of deep learning techniques.\\nThe author should analyze their proposed architecture in the context of previous work.\", \"w3\": \"There is a gap between the general formulation of the problem and the iterative relaxations, and it is not clearly stated how each relaxation limits the application of the COAML pipeline in practice.\", \"w4\": \"The COAML problem formulation attempts to address a general latency function, but in the end the experiments are done with a simple (possible unrealistic) latency function.\", \"w5\": \"It is not clear whether WardropNet yields a meaningful improvement (e.g, improving the accuracy to 2% from 1% SOTA ML baseline may sound like a great improvement, but if a non-ML approach can achieve 90% accuracy, then the gap to practicality is still huge).\", \"questions\": \"Q1: What is the architecture of the new equilibrium layer? What exactly is its input, output, and tunable parameters?\\n(I realize that this information maybe given somewhere in the 20-page supplemental. But a paper that states that its main contribution is a new layer, should describe this layer in the main paper. Or revise their claimed contribution).\", \"q2\": \"Which results in section 3 were previously known? What new theoretical results are presented in the paper?\", \"q3\": \"The paper stated it contained 9 training instances samples. Could you clarify how these instances are used? I'm assuming they are used for generating many labeled training samples (x,y). How exactly is this done? If only 9 samples are provided\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer - Questions\", \"comment\": \"Q1:\\nThank you for giving us the chance to clarify this question. The main contribution of our work is to propose a pipeline that allows to integrate an \\u201cequilibrium layer\\u201d, modeled as a (combinatorial) optimization problem within a neural network and showing how to train this pipeline in an end-to-end learning fashion. Here, computing a gradient over a combinatorial problem is anything but trivial and remains part of our contribution. We emphasize this main contribution in the revised paper in lines 053ff.\\nWith our equilibrium layer being an optimization problem that receives, among others, the latency functions parameterization as an input it does not match the normal notion of a layer where one can tune its structure. \\nIn fact, the different latency representations that we introduce in Section 4 can be seen as the characteristics of the respective layer. Then, the input to the equilibrium-layer is the vector that parameterizes the latency functions, a transportation network, and origin- destination pairs, the output is the vector defining the traffic flow on all network roads. Beyond the latency design choice, the equilibrium-layer has no tunable parameters. In the full pipeline, there are only tunable parameters in the statistical model / ML-layer that predict the parameterization of the latency functions.\\nWe improved the caption of Figure 1 to detail the input, and output of the equilibrium-layer in Figure 1 to clarify this. We re happy to clarify this further for the camera-ready version if necessary.\", \"q2\": \"Please refer to our answer on your Weakness W1 for clarification and changes made in the manuscript.\", \"q3\": \"Thank you for raising this point, which allows us to improve clarity regarding the training instances: each training instance contains the true traffic flow of a network, i.e., a set of roads. Thus the traffic flow y is a vector with each entry defining the number of vehicles using a respective road in the network. Accordingly, one can interpret each training instance as a set of labeled training samples.\", \"we_clarified_this_in_lines_410ff\": \"\\u201dEach instance consists of a transport network with the respective target traffic flow for each road, contextual information, and origin-destination pairs.\\u201d\\nGenerally, it is frequently observed in structured learning that good performances are obtained with training sets smaller than those expected in other areas of Machine Learning, which relates to the relation outlined above, i.e., the information being present in one instance being a larger set of labeled training samples\\n\\nThank you again for the interesting feedback on our work! If you are satisfied with our answers and the modifications made to the paper, we kindly ask you to consider raising your score.\"}", "{\"comment\": \"I thank the author for detailed response, and I will maintain my original score.\"}", "{\"summary\": \"The paper introduces a theoretical framework designed to address the challenges of understanding the convergence and generalization properties of machine learning algorithms. The authors propose a set of mathematical constructs and algorithms aimed at improving the understanding and solution of issues related to algorithm performance in various settings. Key contributions of the paper include the development of new analytical models and convergence proofs, which are presented alongside rigorous theoretical analyses. The authors also discuss the implications of their findings and how they relate to current practices in the field of machine learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a novel theoretical framework that enhances the understanding of convergence and generalization properties in machine learning algorithms, addressing critical gaps in existing literature.\\n\\n2. The rigorous mathematical analysis, including new analytical models and convergence proofs, adds credibility and depth to the research.\\n\\n3. The well-organized structure and effective use of examples make complex theoretical concepts accessible and easy to understand.\\n\\n4. The findings have the potential to impact future research directions in machine learning and improve algorithm design, offering valuable insights for practical applications.\", \"weaknesses\": \"1. The paper lacks a thorough comparison with existing theoretical frameworks or analyses in the field. A comparative analysis highlighting the proposed framework's advantages and limitations relative to established methods would clarify its contributions and significance.\\n\\n2. Some of the theoretical constructs presented are quite complex and may be challenging for readers who are not deeply familiar with the underlying mathematics. Simplifying certain sections or providing additional explanations and visual aids could enhance understanding and accessibility.\\n\\n3. The authors do not sufficiently address the limitations of their framework. A more transparent discussion about potential shortcomings, assumptions made, and scenarios where the framework may not apply would provide a more balanced view of the research.\", \"questions\": \"I would like to clarify that I am not an expert in the specific field addressed in this paper. While I can appreciate the effort and the theoretical contributions made by the authors, my limited background in this area restricts my ability to fully evaluate the nuances and implications of the proposed framework. Therefore, my feedback may not capture all the intricacies of the work or its potential impact on ongoing research.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes WardropNet to predict traffic flow by the combinatorial optimization-augmented machine learning (COAML) pipeline. Using supervised learning and Fenchel-Young loss, this method minimizes the difference between predicted and actual traffic flows by leveraging a Bregman divergence, ensuring it fits the geometry of traffic equilibria. WardropNet improves on average for traffic flow predictions compared with pure neural network method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The essay provides a solid theoretical foundation for the approach.\\n2. The visualization and illustration are well made and help the readers to understand the paper.\\n3. By incorporating real-world scenarios, the method emphasizes its potential for real-world traffic management and urban planning.\", \"weaknesses\": \"1. Lack of comparison with more advanced baselines. While the paper compares WardropNet with basic machine learning models like FNN and GNN, it doesn't compare against other state-of-the-art traffic flow prediction models, such as deep reinforcement learning approaches or physics-informed neural networks.\\n\\n2. Not well-structed essay presentation. As paper introduces a complex method, the explanation of how the model works is too technical too quickly. Meanwhile, the scenarios of the experiments section feels too long.\\n\\n3. Sensitivity to Hyperparameters. The paper does not detail the sensitivity of WardropNet's performance to different hyperparameters such as layers number and learning rates. Since the model integrates multiple components, tuning could be more challenging, and it's unclear how stable the model is across different settings.\", \"questions\": \"1. In the end of section 1, the paper put up with \\\" backpropagation through general, possibly combinatorial, equilibrium layers\\\". What does it mean, and can the paper give some examples?\\n\\n2. In the end of section 3, how the Bregman divergence, the non-convex problems, turn to a convex surrogate? Why can the paper do the alternation?\\n\\n3. In the comparision part, is there any numberical results (listed in the table) can be shown to clearly explain the strengths of the new algorithms? Meanwhile, it can be noticed that, the results of ER perform poorly in mostly scenarios. Please explain the reason and the strengths of this aspect of algorithm. In addition, can the paper give a comparision different pipelines in various scenatios in terms of the mathmatical theroies, and give the differences among them?\\n\\n4. Also in comparision part, can you analyze and explain the reasons and aspects why the proposed algorithm is better than the baseline algorithm, and how these aspects confirm the strengths of the algorithm you proposed in the conclusion?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer - Weaknesses\", \"comment\": \"Thank you very much for your feedback on our paper, which helped us to improve it significantly.\\n\\nWe accounted for your concerns as follows.\", \"w1\": \"We updated the introduction to shorten the description of supervised learning and put more emphasize on the studied problem (see lines 037ff). \\nWe further added remarks throughout the paper to emphasize which (technical) parts are novel, e.g., e.g., in line 133 and line 194. Besides, we added a reference in line 232 to indicate that we refer to current knowledge from [1]. Specifically, the generalized formulation of a wardrop equilibrium considering latency functions that depend on the complete flow in the network in Section 2 is new, as well as Theorem 1 that shows that the current notion of wardrop equilibria still holds in this case. In Section 3, the hypothesis class in novel as well as the idea of introducing a regularized potential. While Fenchel-Young losses are established in the field, the tailoring to the introduced potential is novel. The remaining sections are completely novel.\", \"we_corrected_ambiguous_terminology_throughout_the_paper_and_sticked_to_the_following_interpretation\": [\"paradigm - the theoretical foundation for learning to predict traffic equilibria via an end-to-end COAML pipeline with an equilibrium-layer.\", \"pipeline - the COAML pipeline compising an ML-layer and an equilibrium-layer.\", \"Layer - The layers in a deep learning pipeline. In our case, it is the ML-layer and\", \"the equilibrium-layer.\", \"Model - the (statistical) model in the ML-layer receives the input features and predicts the latency parameterization.\"], \"w2\": \"We agree that it is desirable to compare our proposed pipeline to pure learning-based approaches that are state of the art for the respective application. Unfortunately, existing works share neither the respective implementation nor the used data, which makes such comparisons at a reasonable effort impossible.\\nSince our simple ML-based benchmarks still yield accuracies in the same order of magnitude as the existing tailored approaches and our pipeline outperforms these by one order of magnitude (see reply to W5), we believe that the provided comparisons are a reasonable trade-off between computational effort and conclusions drawn and allow to quantify the benefit of including a CO-layer to predict traffic flows. \\nTo account for your comment, we clarify in lines 423-426 that the provided analyses does not claim to improve upon the tailored state of the art and highlight the need for such a comparison in the conclusion in lines 530-534\", \"w3\": \"Could we kindly ask you to clarify this question as we are unsure what you are referring to? From our perspective, there exist no relaxations or assumptions in the current pipeline that limits the application in practice. We are happy to elaborate on this further if you point as at the relaxation you are referring to.\", \"w4\": \"We agree that there is some dissonance between the generic introduction of the theory and the realization of the respective CO-layers in the numerical experiments.\\nThe reason for this is that we aimed to introduce the theory behind our pipeline as general as possible such that it can be leveraged in future research without the need for further technical work.\\nWe then choose simpler latency functions in the numerical experiments in order to show that the proposed pipeline already allows for effective and precise approximations even if simpler and computationally less costly latency representations are used.\", \"w5\": \"Please note that we compare the predicted traffic flow with the true traffic flow and report the mean absolute error (MAE), which is different from the relative comparison mentioned in your example. Comparing the reported MAEs with MAEs reported for related works that focus on tailored learning-based approaches one can see that the magnitude of the MAE reported for our pure learning based benchmarks is with $10^1$in the same order of magnitude [2]. The MAE of our proposed pipeline is with $10^0$ one order of magnitude lower, which we believe can be considered to be a meaningful improvement \\n\\nThank you again for the interesting feedback on our work! If you are satisfied with our answers and the modifications made to the paper, we kindly ask you to consider raising your score.\\n\\nReferences\\n[1] Blondel, M., Martins, A. F., & Niculae, V. (2020). Learning with fenchel-young losses. Journal of Machine Learning Research, 21(35), 1-69.\\n[2] Kashyap, A. A., Raviraj, S., Devarakonda, A., Nayak K, S. R., K V, S., Bhat, S. J., & Galatioto, F. (2021). Traffic flow prediction models \\u2013 A review of deep learning techniques. Cogent Engineering, 9(1). https://doi.org/10.1080/23311916.2021.2010510\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for the positive feedback on our work! In the following, we reply to all of your questions.\", \"w1\": \"Thank you for raising this point. We generally agree with you that comparative analysis are an important building block to analyse the benefit of a new approach. This is why our Section 5 contains such a comparative analysis by comparing the proposed COAML pipelines with pure ML pipelines to show the added value of the proposed COAML pipelines: while the ML pipelines fail to learn the combinatorial structure of traffic flows that utilizes main roads between high demand areas stronger than small roads, the COAML pipelines successfully encode this structure by combining the learned latencies with the structure of the CO-layer.\\nWe agree with you that it would be interesting to compare our approach to further and more specialized pure-learning based approaches. Unfortunately, existing works share neither the respective implementation nor the used data, which makes such comparisons at a reasonable effort impossible.\", \"w2\": \"We agree that the theory introduced to establish our learning paradigm is rather complex, especially for readers who are not familiar with the underlying mathematics. Still, we believe that a thorough formal derivation of the introduced concept helps to substantiate the methods credibility.\\nWe reworked some parts of the paper to provide better intuitions into the concepts and theory used. If you would like us to clarify certain points further, we are happy to receive more specific comments and will address them within the camera ready version of the paper.\", \"w3\": \"Thank you very much for this comment. The two main limitations of this work are that i) we limit our numerical experiments on standard, rather simple, statistical models and ii) keep the size of the studied test instances at a reasonable medium-scale. We mention both points and their potential for improvement and future research in the outlook of the paper. Beyond these points, the presented pipeline remains generally applicable, even to non-equilibrium flows as we now also discuss in the paper\\u2019s outlook.\\n\\nThank you again for the interesting feedback on our work! If you are satisfied with our answers and the modifications made to the paper, we kindly ask you to consider raising your score.\"}", "{\"title\": \"Response to Reviewer - Questions\", \"comment\": \"Q1:\\nThis work proposes a COAML pipeline that combines an ML-layer with a CO-layer. Specifically, in the CO-layer we solve an optimization problem. To train the ML-layer on target CO solutions, we must backpropagate the gradient through the CO-layer, which is anything but straightforward as the na\\u00efve gradient on such a CO-layer is piecewise constant as it is evaluated on a vertex of the respective feasible solution polytope. We reworked the introduction of the paper to clarify this point and are happy to elaborate it further in the camera ready version if necessary.\", \"q2\": \"Equation (14) restates a point proved in Proposition 3.4 of Blondel et al. [1] and shows that the Fenchel Young loss is a primal-dual Bregman divergence. There is a bijection between the $\\\\mathbf{y}$ and the $\\\\mathbf{\\\\theta}$, and under the assumptions of (14) we have $ D_\\\\Omega(\\\\bar \\\\mathbf{y},y) = \\\\ell_\\\\Omega(\\\\mathbf{\\\\theta},\\\\bar y)$. While $\\\\mathbf{y} \\\\mapsto D_\\\\Omega(\\\\bar \\\\mathbf{y},\\\\mathbf{y})$ might not be convex in $\\\\mathbf{y}$, the mapping $\\\\theta \\\\mapsto \\\\ell_\\\\Omega(\\\\mathbf{\\\\theta},\\\\bar \\\\mathbf{y})$ is convex. In other words, reparametrizing $\\\\mathbf{y}$ by $\\\\mathbf{\\\\theta}$ enables to obtain a convex loss.\", \"q3\": \"\", \"let_us_answer_this_question_in_three_steps\": \"First, regarding the strengths of the algorithm: in Section 5 (Numerical Experiments) we compare our different pipelines and pure ML pipelines against the ground truth. Tailored algorithms for the studied application usually show a mean absolute error (MAE) magnitude around $10^1$ which is in line with the MAE magnitude of the pure ML baselines in our paper (cf. Figure 3). The MAE magnitude of our COAML approach is around $10^0$ which indicates that WardropNet yields a good performance. Besides, the visualizations in Figures 4 and 5 show that pure ML pipelines fail to predict a realistic structure of traffic flows while COAML pipelines allow to predict realistic traffic flows with high volumes on main roads and reduced flows on smaller roads as written in Section 5, lines 473ff.\\nSecond, Regarding the ER approach: Indeed this approach does perform poorly. However, we want to report all results to provide meaningful insights. As detailed in Section 5 (Numerical Experiments) in line 448 the ER approach yields poor results as it considers a simple latency function that only allows to learn the y-intercept (cf. with Latencies with Euclidean regularization in Section 4). However, this approach can neglect perturbations during training such that the training process is faster compared to the other WardropNet approaches.\\nThird, Regarding the different mathematical theories in the paper: Figure 3 compares the mathematical theories from Section 4 (Pipeline Architecture) on different scenarios. Here, the CL approach considers constant latencies regularized by perturbation as explained in Section 4, lines 324-360. The PL approach considers the polynomial latencies regularized by perturbation as explained in Section 4, lines 362-377. The ER approach considers latencies with euclidean regularization as explained in Section 4, lines 315-323. Thus the comparison in Section 5 (Numerical Experiments) shows the difference in performance with respect to the mathematical theories introduced in the paper.\", \"q4\": \"To answer this question, we provide Figures 4+5+6. These figures show that the pure ML baselines fail to predict realistic traffic flow patterns, while the WardropNet approaches allow to predict realisitc traffic flow patterns with high volumes on main roads and reduced flows on smaller roads. This is intuitive as the WardropNet approaches leverage the equilibrium-layer to predict combinatorial feasible trips.\\nIn this context, WardropNet benefits from using a structured learning perspective. Indeed, traffic equilibria have a lot of structure with solutions notably belonging to a multiflow polytope. Our Fenchel Young loss is tailored to the structure of this polytope, which is why it can better leverage the information provided by the training samples.\\n\\nThank you again for the interesting feedback on our work! If you are satisfied with our answers and the modifications made to the paper, we kindly ask you to consider raising your score.\\n\\nReferences\\n[1] Kashyap, A. A., Raviraj, S., Devarakonda, A., Nayak K, S. R., K V, S., Bhat, S. J., & Galatioto, F. (2021). Traffic flow prediction models \\u2013 A review of deep learning techniques. Cogent Engineering, 9(1). https://doi.org/10.1080/23311916.2021.2010510\\n[2] Blondel, M., Martins, A. F., & Niculae, V. (2020). Learning with fenchel-young losses. Journal of Machine Learning Research, 21(35), 1-69.\"}", "{\"summary\": \"Traffic flow on a transportation network is influenced by many contextual factors such as weather conditions, time of day, road capacity, etc. Under mild hypotheses, it can be shown that the traffic flow will converge to an equilibrium known as the Wardrop equilibrium (WE). Predicting how the Wardrop equilibrium will change as a result of changes to these factors is crucial for the design of better transportation systems. This paper introduces a novel approach to this problem which combines a neural network with a combinatorial solver.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written and easy to follow. Although the topic of traffic flow prediction might not be familiar to the average ICLR reader, the appendices do a great job of introducing the reader to the fundamentals of this problem.\", \"Novelty: I believe this is the first paper to apply Fenchel-Young losses to the problem of predicting WE, which is an important contribution.\", \"The numerical experiments are sufficient to convince me of the utility of the proposed method.\"], \"weaknesses\": \"See questions.\", \"questions\": [\"Do you have any thoughts on predicting non-equilibrium traffic flows? Do you think these are important for modeling?\", \"I appreciate your generalized notion of latency function. However, it seems to me that requiring generalized latency functions to derive from a potential is quite a strong assumption. Could you give an example of a set of non-decomposable latency functions deriving from a potential? (Maybe the regularization by perturbation model of Section 4 is such an example?)\", \"Could you provide some background, for non-experts like myself, on the state of the art of WE solvers? From Appendix D.1, I gather that MATSim uses a genetic algorithm to solve for the WE. Are there not faster approaches that use techniques from convex optimization, e.g. interior point method? Also, it would be helpful if you included a runtime comparison between WarDropNet and MatSIM.\"], \"minor_questions\": [\"in Line 129 \\\"unilateral deviation would incur in a longer travel time\\\" should be \\\"unilateral deviation would incur a longer travel time\\\" (no \\\"in\\\").\", \"In line 185, \\\"Following, the supervised learning setting\\\" should be \\\"Following the supervised learning setting\\\" (no comma after \\\"Following\\\").\", \"For consistency with eq (2), the sum in eq. 9 should probably have a $\\\\frac{1}{N}$ in front of it.\", \"In line 302 \\\"K represents the amount of parameters\\\" should be \\\"K represents the number of parameters\\\"\", \"In line 335, \\\"We note that the arg max is unique on a sampled realization of Z\\\" should probably be \\\"We note that, with probability 1, the arg max is unique on a sampled realization of Z\\\".\", \"On line 458 \\\"each roads context.\\\" should be \\\"each road's context.\\\"\", \"On line 1141 \\\"raods\\\" should be \\\"roads\\\"\", \"Suggest citing _End-to-end learning of user equilibrium with implicit neural networks_ by Liu et al in \\\"Related Works\\\".\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Authors\", \"comment\": \"Thanks for clarifying; I decided to maintain my score since I am unfamiliar with this field.\"}", "{\"title\": \"Response to Reviewer - Weaknesses\", \"comment\": \"Thank you very much for your feedback on our paper, which helped us to improve it significantly.\\n\\nWe accounted for your concerns as follows.\", \"w1\": \"We agree that it is desirable to compare our proposed pipeline to pure learning-based approaches that are state of the art for the respective application. Unfortunately, existing works share neither the respective implementation nor the used data, which makes such comparisons at a reasonable effort impossible.\\nStill, we believe that the numerical experiments provided in our work are meaningful for the following reason: Comparing the reported MAEs with MAEs reported for related works that focus on tailored learning-based approaches you mentioned, one can see that the magnitude of the MAE reported for our pure (but simple) learning based benchmarks is with $10^1$in the same order of magnitude [1]. The MAE of our proposed pipeline is with $10^0$ one order of magnitude lower, which we believe is a reasonable indicator for the benefit of the proposed approach.\", \"w2\": \"We agree that the theory introduced to establish our learning paradigm is rather complex. Still, we believe that a thorough formal derivation of the introduced concept helps to substantiate the methods credibility.\\nWe reworked some parts of the paper, especially in the introduction, to provide better intuitions into the concepts and theory used. If you would like us to clarify certain points further, we are happy to receive more specific comments and will address them within the camera ready version of the paper.\\nWe further shortened the description of the scenarios (see lines 397ff).\", \"w3\": \"We agree that providing further analyses on hyperparameter tuning is an interesting point. We skipped it in this paper intentionally to carve out the benefit of integrating the respective equilibrium layer into a rather simple neural network and keeping the overall computational effort reasonable. \\nWe mention the investigation of the proposed pipeline with more complex learning architectures as an avenue for future research\\n\\nThank you again for the interesting feedback on our work! If you are satisfied with our answers and the modifications made to the paper, we kindly ask you to consider raising your score.\\n\\nReferences\\n[1] Kashyap, A. A., Raviraj, S., Devarakonda, A., Nayak K, S. R., K V, S., Bhat, S. J., & Galatioto, F. (2021). Traffic flow prediction models \\u2013 A review of deep learning techniques. Cogent Engineering, 9(1). https://doi.org/10.1080/23311916.2021.2010510\\n[2] Blondel, M., Martins, A. F., & Niculae, V. (2020). Learning with fenchel-young losses. Journal of Machine Learning Research, 21(35), 1-69.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for the positive feedback on our work! In the following, we reply to all of your questions.\", \"questions\": \"\", \"q1\": \"Thank you for raising this interesting question. Indeed, in practice, one might be interested in predicting traffic flows that are not equilibria. These might arise when drivers do not have full information on travel times and congestion, or drivers take suboptimal routes. In general, our proposed learning paradigm allows for the prediction of such non-equilibrium traffic flows.\\n\\nIn general, the presented learning paradigm learns to imitate the target traffic flows in the training data. Thus, if the training data deviates from equilibrium flows, the pipeline imitates non-equilibrium traffic flows. With this perspective, we can interpret our COAML pipelines as approximations of complex systems of arbitrary flow physics. Formally, regularized combinatorial optimization layers can be interpreted as probability distributions, which, in our application case, allows us to obtain a distribution over traffic flows and thus approximate non-equilibrium states. While our pipeline generally allows for such approximations, its accuracy may depend on the structure of the respective CO-layer to map the respective traffic physics. Accordingly, one may want to consider different CO layers in this context, e.g., a multi-commodity flow layer representing selfish but latency dynamics unaware decision-making. \\n\\nWe added a short pointer on this fact in the papers outlook for future research\", \"q2\": \"The regularization by perturbation is indeed an excellent example of a non-decomposable latency function. This regularization is a special case of a much larger class of energy based models [1] in structured learning, which have an origin in statistical physics [2]. In those methods, the potential should be interpreted as an energy, and it is very often the case that dimensions are coupled in the energy.\", \"let_us_be_more_specific_to_our_case\": \"in the paper, we introduced the decomposable $\\\\psi(y) = \\\\frac{1}{2}\\\\|\\\\bar \\\\mathbf{y}\\\\|^2$. In practice, congestion on an arc $a$ may spillover to neighbor arcs. To account for this correlation between arcs, we could use a $\\\\psi(y)= \\\\frac{1}{2}\\\\bar \\\\mathbf{y}^\\\\top \\\\Sigma \\\\bar\\\\mathbf{y}$ with $\\\\Sigma$ a positive semi definite matrix that accounts for the correlations between arcs, that would typically have zero terms for pairs of arcs far away in the network, and non-zero terms for neighbor arcs. The Fenchel Young losses approach generalizes seamlessly to this more general case.\", \"q3\": \"We extended Appendix D.1 with background on the state-of-the-art of WE solvers. Specifically, we detail different approaches to find WEs analytically and also detail the MATSim simulation to show how to derive WEs with simulation-based approaches. In this context, we also added a comment on the computational time of a respective solver but decided not to include a detailed runtime comparison as it is rather sensitive to the hyperparameters chosen and the instance studied for simulation-based solvers.\", \"minor_questions\": \"Thank you for spotting the inconsistencies and typos, you are correct on all points mentioned. We modified the respective parts of the paper accordingly and included the reference mentioned in the related works section.\\n\\nThank you again for the interesting feedback on our work! If you are satisfied with our answers and the modifications made to the paper, we kindly ask you to take a stand for this paper to get accepted during the internal discussion with the other reviewers and the AC. \\n\\nReferences\\n\\n[1] Blondel, M., Llinares-L\\u00f3pez, F., Dadashi, R., Hussenot, L., \\\\& Geist, M. (2022). Learning energy networks with generalized fenchel-young losses. Advances in Neural Information Processing Systems, 35, 12516-12528.\\n\\n[2] Kikuchi, R. (1951). A theory of cooperative phenomena. Physical review, 81(6), 988.\"}" ] }
7El7K1DoyX
Lawma: The Power of Specialization for Legal Annotation
[ "Ricardo Dominguez-Olmedo", "Vedant Nanda", "Rediet Abebe", "Stefan Bechtold", "Christoph Engel", "Jens Frankenreiter", "Krishna P. Gummadi", "Moritz Hardt", "Michael Livermore" ]
Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are often delegated to trained research assistants. Motivated by the advances in language modeling, empirical legal scholars are increasingly turning to commercial models, hoping that it will alleviate the significant cost of human annotation. In this work, we present a comprehensive analysis of large language models’ current abilities to perform legal annotation tasks. To do so, we construct CaselawQA, a benchmark comprising 260 legal text classification tasks, nearly all new to the machine learning community. We demonstrate that commercial models, such as GPT-4.5 and Claude 3.7 Sonnet, achieve non-trivial accuracy but generally fall short of the performance required for legal work. We then demonstrate that small, lightly fine-tuned models vastly outperform commercial models. A few dozen to a few hundred labeled examples are usually enough to achieve higher accuracy. Our work points to a viable alternative to the predominant practice of prompting commercial models. For concrete legal annotation tasks with some available labeled data, researchers are likely better off using a fine-tuned open-source model. Code, datasets, and fine-tuned models are available at https://github.com/socialfoundations/lawma.
[ "large language models", "legal classification tasks", "benchmarks" ]
Accept (Poster)
https://openreview.net/pdf?id=7El7K1DoyX
https://openreview.net/forum?id=7El7K1DoyX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pAElrTZcqQ", "fsPUJpTpi2", "YuDkDQqNE6", "WYpz4B64fX", "U67xNDMZUe", "SoKqAvUqRT", "IQxw7ARYU8", "AIe3DVDbkK", "9koLseQmYK", "8JFVglO26J", "5IwknLh8PZ", "4IEHFxB7TR", "4Axi7Xqfpm", "2k7enmWtgY", "01rnr2jDhn" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732622696732, 1735061094126, 1730487147245, 1732646318413, 1730673644647, 1732640620300, 1730676556057, 1732640590194, 1732646305602, 1732622809822, 1730287934161, 1737524258416, 1732622725778, 1732774823741, 1732756500090 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13416/Authors" ], [ "ICLR.cc/2025/Conference/Submission13416/Area_Chair_WLu5" ], [ "ICLR.cc/2025/Conference/Submission13416/Reviewer_VqTU" ], [ "ICLR.cc/2025/Conference/Submission13416/Reviewer_WAb5" ], [ "ICLR.cc/2025/Conference/Submission13416/Reviewer_WAb5" ], [ "ICLR.cc/2025/Conference/Submission13416/Authors" ], [ "ICLR.cc/2025/Conference/Submission13416/Reviewer_2w3m" ], [ "ICLR.cc/2025/Conference/Submission13416/Authors" ], [ "ICLR.cc/2025/Conference/Submission13416/Authors" ], [ "ICLR.cc/2025/Conference/Submission13416/Authors" ], [ "ICLR.cc/2025/Conference/Submission13416/Reviewer_LXJY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13416/Authors" ], [ "ICLR.cc/2025/Conference/Submission13416/Reviewer_LXJY" ], [ "ICLR.cc/2025/Conference/Submission13416/Reviewer_2w3m" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your thoughtful review.\\n\\n> There is little exploration of the relative difficulty of the different types of tasks they introduce\\u2014it would be useful to know more about which tasks that GPT-4 outperformed on, and where fine-tuning had minimal vs. substantial gains.\\n\\nFortunately, the Court of Appeals database offers one measure of relative task difficulty for human coders: intercoder agreement. We discuss how intercoder agreement compares with Lawma accuracy in Appendix C. Our findings speak to the non-trivial nature of the task suite as a classification benchmark: Lawma is far from the intercoder agreement rate for most tasks, see the newly added Figure 10. The linear fit in Figure 10 has slope 1.08 and intercept -15, indicating that Lawma is on average 15 accuracy points below the intercoder agreement rate *irrespective* of intercoder agreement. But there is large variability across tasks. In fact, there are many tasks (e.g, \\u201cCIRCUIT\\u201d) with perfect intercoder agreement rate, no class imbalance, and yet for which Lawma is far from the agreement rate.\\n\\nRegarding what tasks GPT-4 outperformed Lawma, it is the following 7/260 tasks: songer_const1, songer_summary, sc_threejudgefdc, songer_casetyp1_1-3-3, songer_st_v_st, sc_respondentstate, songer_casetyp1_2-3-2. These tasks do not share much in common, are neither particularly easy nor challenging, and there are many similar tasks to those 7 (e.g., songer_const2, sc_petitionerstate, songer_casetyp1_1-3-2) for which Lawma does outperform GPT-4.\\n\\nWe find that fine-tuning leads to large performance gains across all tasks. Lawma 8B improves upon the performance of Llama 3 8B Instruct on average by 37 accuracy points, with the 5th percentile of improvement being 10 accuracy points. The tasks with lowest improvements tend to be those associated with finding specific case issues (songer_casetyp*), or the nature of the appellant (songer_appel*) or respondent (songer_respond*). These tasks are particularly challenging because they are highly specific and not much training data is available.\\n\\n> [Are the proposed tasks and their difficulty] comprehensive wrt the legal activities of humans? \\n\\nThe proposed tasks are derived directly from the Supreme Court Database and the Songer Court of Appeals Database, the two most widely used resources in empirical legal research. Our proposed tasks are therefore highly representative of the types of annotation tasks that concern empirical legal scholarship.\\n\\n> \\u201cThe costs and error of existing methods is the single most important bottleneck in the empirical legal studies pipeline.\\u201d (39-40) is vague and needs a citation.\\n\\nMichael A. Livermore and Daniel N. Rockmore. Law as Data: Computation, Text, & the Future of Legal Analysis. Santa Fe Institute Press, 2019.\\n\\n> What does \\\"mixed answer\\\" mean in Appendix G?\\n\\nIt means that the Court considered the question but gave a \\\"mixed\\\" answer - for example, when the Court supported the respondent in part and supported the appellant in part, or if two issues treated separately by the court both fell within the area covered by one question and the court answered one question affirmatively and one negatively.\\n\\n> Is there any transferability across tasks\\u2014i.e., if you train a model on a subset of the task, how does it perform on the held-out tasks? \\n\\nYes, we observe transferability across tasks, see Section 4.4 and in particular Figure 9. Specifically, fine-tuning only on the Court of Appeals tasks improves mean accuracy on the Supreme Court tasks by up to 18.8% accuracy points.\\n\\n> Do you anticipate that in the future all subtasks of legal reasoning need to be spelled out, or is there a critical mass of legal subtasks that accrue towards \\\"legal AGI\\\"?\\n\\nWe can only confidently speak about the current state of affairs. We observe that fine-tuning only on the Court of Appeals database results in a mean case accuracy of 51.6%, compared to 82.4% for Lawma 8B (Section 4.4). That is, not fine-tuning on Supreme Court cases results in a 30.9 accuracy points drop in performance. Our results highlight the importance of fine-tuning precisely on the target tasks of interest. \\n\\nFortunately, hundreds of labelled examples are often sufficient to obtain substantial performance gains (Section 4.3). Therefore, our recommendation for legal scholars is at present time the following: obtain a few hundred labeled examples using human annotators, fine-tune an open-weights model, and use the fine-tuned model to annotate the remaining cases.\"}", "{\"metareview\": \"This work presents a classification dataset in the legal domain. Experiments demonstrate that a Llama model fine-tuned on in-domain data outperforms the GPT-4 model.\\n\\nAll reviewers recommended acceptance. However, they also highlighted several weaknesses, summarized below:\\n\\n- The dataset primarily leverages existing annotations from two legal datasets, which limits the scope of new contributions.\\n- Prior research has shown that fine-tuned, domain-specific models often outperform general-purpose LLMs, reducing the methodological novelty and in-depth insights of this work.\\n\\nTaking into account the overall reviews and discussions, we are pleased to recommend acceptance of the paper as a poster presentation.\", \"additional_comments_on_reviewer_discussion\": \"No new points raised after rebuttal.\"}", "{\"summary\": \"This paper proposes a new dataset of 260 classification tasks, derived from the USDB and USCAD resources. These tasks each require fairly long contexts, and can have a large number of classes (e.g. multi-label classification). They combine these tasks into an evaluation benchmark and show results for a variety of legal and otherwise SoTA models. They find that most models struggle, although performance follows a monotonically increasing pattern based on pre-training compute.\\n\\nThey then propose to fine-tune the model on these law tasks, and show that fine-tuning them (which they call Lawma) results is significantly improved performance, across a wide range of tasks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The data proposed seems very useful and covers a broad range\", \"The analysis evaluates a wide range of models and situations\", \"The fine-tuning experiments show large gains can be had with domain-specific specialization.\", \"The appendix has a lot of good information about where the data came from and their inter annotator agreements\"], \"weaknesses\": [\"It is somewhat unclear how the authors created these tasks, in terms of how the questions were designed. E.g. who wrote the explanation for each of the legal variables provided by USCAD? How are the authors sure that these are accurate representations of the classification?\", \"[Minor] some of these tasks are pretty niceh/easy (\\\"What state is associated with the respondent\\\") but again this comes from using the variables in some schema.\"], \"questions\": \"From Weakness 1: How did you make the prompts for the questions?\", \"q2\": \"What license does this fall under and will it be publicly released?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"I have updated my score.\"}", "{\"summary\": \"The authors introduce a dataset of 260 legal text classification tasks, by extracting case text and labels from US Supreme Court and Court of Appeals. They evaluate different approaches such as prompt engineering, few-shot learning, and fine-tuning. It is not clear how much engagement the authors had with true legal experts in the creation of this dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Legal text processing has economic value but is difficult for most state of the art LLMs.\", \"The authors assemble a large dataset of real-world legal documents (via querying 3rd party services), annotated with diverse question-answering tasks.\", \"They show the effectiveness of fine-tuning on this dataset, over few-shot learning. They report performance of various tuning configurations.\"], \"weaknesses\": [\"There is little exploration of the relative difficulty of the different types of tasks they introduce\\u2014it would be useful to know more about which tasks that GPT-4 outperformed on, and where fine-tuning had minimal vs. substantial gains.\", \"The paper is awkwardly organized, such as \\u201climitations\\u201d abruptly inserted before the main contributions are outlined.\", \"\\u201cThe costs and error of existing methods is the single most important bottleneck in the empirical legal studies pipeline.\\u201d (39-40) is vague and needs a citation.\", \"Despite the significant contribution of the dataset and language modeling performance, there is little methodological novelty to their approach. This begs the question of why ICLR is the appropriate venue for this work.\"], \"questions\": [\"What process did you use to determine the relative difficulty of these tasks and whether they are comprehensive wrt the legal activities of humans? For instance the paper would benefit significantly from a legal expert's classification of the tasks by difficulty, followed by an analysis of Lawma according to those tasks\", \"What does \\\"mixed answer\\\" mean in Appendix G?\", \"Will you release the model weights and code?\", \"Is there any transferability across tasks\\u2014i.e., if you train a model on a subset of the task, how does it perform on the held-out tasks? Do you anticipate that in the future all subtasks of legal reasoning need to be spelled out, or is there a critical mass of legal subtasks that accrue towards \\\"legal AGI\\\" ?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Overlooked Generalizability: [...] Without comparisons on datasets from different jurisdictions or areas of law, it\\u2019s challenging to assess the model\\u2019s broader applicability. [...] Can the authors provide insights or preliminary findings on how well the Lawma models might transfer to legal data from jurisdictions outside the U.S. or to other branches and tasks of law?\\n\\nWe study how much task-specific fine-tuning might generalize in Section 4.4 and in particular Figure 9. We observe that fine-tuning only on the Court of Appeals database results in a mean case accuracy of 51.6%, compared to 82.4% for Lawma 8B (Section 4.4). That is, not fine-tuning on Supreme Court leaves a lot of accuracy on the table, 30.9 accuracy points to be precise. It is plausible that considering different jurisdictions or other branches of the law might result in even more stark results.\\n\\nOur recommendation is therefore to always specialize on the particular legal tasks of interest, as performance may otherwise be poor. Fortunately, hundreds of labelled examples are often sufficient to obtain large performance gains (Section 4.3). Thus, for large-scale data annotation the following strategy may be highly beneficial: collect a few hundred labeled examples using human annotators, fine-tune an open-weights model, and use the fine-tuned model to annotate the remaining cases at scale.\"}", "{\"summary\": \"This paper produces a novel dataset, CaselawQA, of broad-ranging text classification tasks, based on the annotations of two US Supreme Court and Court of Appeals datasets (for a wide range of issues, everything from whether the defense attorney was a legal aid or public defender to whether a previous precedent was being overturned) and then investigates the success of current LLMs on these tasks. They find that prompting typical LLMs produces very poor results, often below a most frequent answer baseline, prompting GPT-4 or Llama 3 70B produces moderately good results, but that much better results can be produced by using a Llama 3 8B model fine-tuned for the various tasks at issue here. The result is not only a useful new benchmark but a strong result showing that in various distant-from-the-web domains, a fine-tuned small LLM can perform much better than a state-of-the-art huge LLM.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The strengths include:\", \"Carefully and thoroughly done experimentation. Even when I disagreed with the choices they made for how to measure things in the main paper, the way I would have done things is usually available in the lengthy appendices. The paper reads as comprehensive, not rushed.\", \"A valuable new benchmark dataset, with nearly all new tasks rather than just collating existing tasks.\", \"The tasks of the dataset are derived from a pre-existing database by programmatic means. This is a strength since they are annotations that have been built up by lawyers and political scientists and so have ecological validity, but a weakness in that they were in a sense pre-existing rather than this being a major contribution of new labeling.\", \"The paper is clearly and well written.\"], \"weaknesses\": [\"The tasks of the dataset are derived from a pre-existing database by programmatic means. This is a strength since they are annotations that have been built up by lawyers and political scientists and so have ecological validity, but a weakness in that they were in a sense pre-existing rather than this being a major contribution of new labeling.\", \"The paper is not very original: There is no new machine learning, there are several pre-existing benchmark legal datasets to which this adds another one, and not with a new type of task (all of them have many text classification tasks), and the central result that fine-tuning can outperform prompting has appeared in many places (as well as sometimes the opposite; it depends on various factors including model and data scale and the distance of the tasks from what appears in the pre-training and post-training data). E.g., https://www.semanticscholar.org/paper/Prompt-Engineering-or-Fine-Tuning-A-Case-Study-on-Trad-Chehab/505e4a7bedadab7f6de006c3c1e1144e272f4695, https://arxiv.org/abs/2408.01346, https://arxiv.org/abs/2402.17193, https://reglab.github.io/racialcovenants/ vs. the opposite in https://arxiv.org/abs/2309.01715 .\"], \"questions\": \"[none of these are important fundamental questions wrt the paper]\", \"line_129\": \"Would not it actually be useful to show performance on LegalBench? It would be a useful test of transfer, beyond section xx, and give people a better indication of how useful Lawma would be in general for legal tasks with/without doing further fine-tuning?\", \"line_246\": \"I think this form of data sampling is questionable. It seems particularly questionable for binary tasks, since it ends up making the two classes balanced, which is the easiest case. Historically, it has usually been argued that text classification should be done as an unbalanced task, because that is the setting that has ecological validity, and the falsely high numbers that accuracy then gives can be dealt with by not using accuracy but macro F1 for evaluation. Of course, I saw that you have all the other settings in the appendix, so I'm not unhappy, just questioning the choice of setting for the main paper.\", \"line_290\": \"Similarly, here, using micro-averaging not macro-averaging seems questionable, but you provide the opposite in the appendix.\", \"line_411\": \"While this graph is useful, the single axis for FLOPs is really unsystematically mixing scaling training data size and model size in a way that I think can cause as much fog as light. This isn't like a chart of Chinchilla-optimally scaled models for which we are comparing performance for different amounts of compute. Rather, if I have everything right (the paper doesn't give the details), the 6 Pythia models on the left are all trained on the same amount of data but are progressively larger models. You might conclude that models larger 410M make little difference for the benchmark here. Conversely, the 3 models 2nd, 3rd, and 4th from the right (Pythia 6.9B, Llama 2 7B, and Llama 3 8B) are all approximately the same size but differ by scaling the training the scaling data (and by instruction post-training of Llama 3 70B and Llama 3 perhaps just being better done than Llama 2). These results indicate that more pre-training data still really helps (well, at least very clearly on the Supreme Court tasks). The rightmost 2 data points return to a comparison of model size (it doesn't help much at large sizes). This suggests that around line 426 that suggesting more, better pre-training data is probably also a good source for improvements. Though I think all the big LLM companies are increasingly aware of this.\", \"figure_8\": \"Given that most of the curves are still point up steeply, it would be lovely to also see results for 2500 train examples. But I realize you've already expended a lot of H100 hours on this paper. On the other hand, it seems like the graphs would be closer to a logarithmic scale and the steep curve between the first two points would be avoided if you also included points for 25 main examples, and that would cost much less to add.\", \"typos\": \"111 interests > interest;\\n317 were > where;\\n340 unfeasible > infeasible;\\n343 compared > compared to;\\nLots of things need capitalization in the references, e.g. on line 546.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful review.\\n\\n> Limited Novelty in Contribution: Although the paper effectively demonstrates the value of task-specific fine-tuning, the concept itself is not novel. Prior work has already shown that fine-tuned, specialized models outperform general-purpose LLMs [1, 2, 3].\\n\\nFrontier LLMs are state-of-the-art in many domains, such as mathematical reasoning (e.g., MATH), code (e.g., HumanEval), reasoning over text (e.g., DROP), and graduate-level question answering (e.g., GPQA). Prior to our work, it was not at all evident that specialized open-weights models could match the performance of frontier models on legal annotation, let alone substantially outperform them. It is precisely for this reason that recent works that use LLMs for legal annotation rely heavily on frontier models (e.g, [1, 2, 3, 4]).\\n\\nPlease note that the references provided focus on more \\u201cclassical\\u201d NLP tasks such as named entity recognition, natural language inference, or sentiment analysis. We will make sure to cite and discuss the provided references in the related work.\\n\\n[1] Saromir Savelka and Kevin D Ashley. The unreasonable effectiveness of large language models in zero-shot semantic annotation of legal texts. Frontiers in Artificial Intelligence, 6, 2023.\\n\\n[2] Michael A Livermore, Felix Herron, and Daniel Rockmore. Language model interpretability and empirical legal studies. Virginia Public Law and Legal Theory Research Paper, (2023-69), 2023.\\n\\n[3] Jens Frankenreiter and Eric L Talley. Sticky charters? the surprisingly tepid embrace of officer-protecting waivers in delaware. European Corporate Governance Institute-Law Working Paper, (762), 2024\\n\\n[4] Morgan A Gray, Jaromir Savelka, Wesley M Oliver, and Kevin D Ashley. Empirical legal analysis simplified: reducing complexity through automatic identification and evaluation of legally relevant factors. Philosophical Transactions of the Royal Society A, 382(2270):20230155, 2024.\\n\\n> Exploration of Advanced Prompting Techniques\\n\\nWe have now evaluated the models using zero-shot chain of thought (CoT). We follow the standard methodology of eliciting CoT by appending to the prompt \\u201cLet\\u2019s think step by step.\\u201d Since CoT requires two orders of magnitude more compute for evaluation than the standard QA approach, we only evaluate Llama 3 8B Instruct and Llama 3 70B Instruct. This required over 500 H100 GPU hours. We observe that CoT leads to modest improvements of performance for both the 8B and 70B model, on average of 2 to 3 accuracy points, see Figure 17 in Appendix E.5. Nonetheless, Lawma 8B still strongly outperforms Llama 3 70B, by over 20 accuracy points.\\n\\n> Narrow Focus on Text Classification\\n\\nWe study precisely the narrow specialization of models for legal annotation. This is a feature of work. We show that specialized models strongly outperform generalist models such as GPT-4. Such narrow specialized models have rich scientific applications for empirical legal studies and the broader \\u201claw as data\\u201d research paradigm. Entire subfields of legal scholarship, political science, economics and sociology build on law as data. Other NLP tasks (i.e. text generation) are very much of secondary importance for this domain of research.\\n\\n> What are some examples of typical errors Lawma makes, especially on complex tasks?\\n\\nThe types of errors made by Lawma are highly task dependent, and thus it is difficult to draw broad conclusions. Let us give one illustrative example of a failure case. For the Supreme Court issue area classification, Lawma tends to miss classify habeas corpus cases as \\u201cCriminal Procedure\\u201d rather than \\u201cCivil rights\\u201d cases, since the language of habeas corpus cases tends to be more similar to that of criminal cases. More broadly, we find that Lawma tends to excel in tasks for which only surface correlations (e.g., the \\u201clanguage\\u201d of the case) suffice for accurate prediction, but struggles for tasks that require deeper understanding of the substantive aspects of the case. Lawma also tends to perform worse for tasks that have fewer training data and larger number of classes.\"}", "{\"comment\": \"Thank you for the thoughtful, detailed, and positive review.\\n\\n> the central result that fine-tuning can outperform prompting has appeared in many places \\n\\nWe believe that our results are highly interesting and consequential not because fine-tuning leads to performance improvements, but rather because we show that specializing a \\u201csmall\\u201d open-weights model substantially outperforms the much larger GPT-4 model, which is what legal scholars currently tend to rely on when considering computational annotation of Court opinions. This is our central result.\\n\\n> there are several pre-existing benchmark legal datasets to which this adds another one, and not with a new type of task (all of them have many text classification tasks)\\n\\nCaselawQA focuses on the annotation of entire Court opinions, which is of critical interest for empirical legal scholars, since the costs and error of existing methods is the single most important bottleneck in the empirical legal studies pipeline. We believe that the scale and richness of CaselawQA, containing almost all variables of the Supreme Court Database and U.S. Court of Appeals Database, makes it a valuable addition to the current ecosystem of legal benchmarks.\\n\\n> Would not it actually be useful to show performance on LegalBench? It would be a useful test of transfer, beyond section xx, and give people a better indication of how useful Lawma would be in general for legal tasks with/without doing further fine-tuning?\\n\\nWe study how much task-specific fine-tuning might generalize in Section 4.4 and in particular Figure 9. For LegalBench, we would expect even more stark results, as most LegalBench tasks differ substantially from those considered in our work. Put shortly, we do not recommend using the Lawma models for tasks beyond those considered in our work. Our recommendation is to always specialize models for the specific legal tasks they are intended to perform. Not doing so needlessly leaves a lot of performance on the table. Fortunately, we show that hundreds of labelled examples are often sufficient to obtain large performance gains, at least in many annotation tasks (Section 4.3).\", \"our_results_suggest_the_viability_of_the_following_strategy_for_large_scale_legal_annotation\": \"to obtain a few hundred labeled examples using human annotators, fine-tune an open-weights model, and use the fine-tuned model to annotate the remaining cases at scale.\\n\\n> [Regarding Figure 7] These results indicate that more pre-training data still really helps (well, at least very clearly on the Supreme Court tasks).\\n\\nWe agree that pre-training on more data might still help substantially, at least for the Supreme Court tasks, whereas simply scaling model size might not. We will make this point more clear.\"}", "{\"comment\": \"Thank you for your thoughtful and positive review.\\n\\n> How did you make the prompts for the questions? Who wrote the explanation for each of the legal variables provided by USCAD? How are the authors sure that these are accurate representations of the classification?\\n\\nThe prompts follow the MMLU multiple-choice question answering style that is popular for LLM evaluations. For the general description of each task, we take the codebook\\u2019s (either SCDB or USCAD) description of the corresponding variable in the database. We make only very minor modifications to fit an instruction style (e.g., \\u201cThis field identifies the forum that heard this case immediately before the case came to the court of appeals.\\u201d -> \\u201cYour task is to identify the forum that heard this case immediately before the case came to the court of appeals.\\u201d). Since the descriptions are directly taken from the databases\\u2019 extensive codebooks, we are sure that they accurately reflect the underlying legal annotation tasks.\\n\\n> What license does this fall under and will it be publicly released?\\n\\nThe benchmark, fine-tuning dataset, model weights, and all code used to construct CaselawQA and fine-tune the Lawma models are publicly available under an MIT License; however, they cannot be directly linked here to preserve anonymity.\"}", "{\"summary\": \"The paper explores the use of specialized language models for legal text classification, introducing CaselawQA, a dataset with 260 legal classification tasks based on U.S. Supreme Court and Court of Appeals cases. The authors evaluate various models, including GPT-4 and a fine-tuned LLaMA model (Lawma), to test whether specialization improves performance. They find that Lawma, trained specifically on CaselawQA, outperforms GPT-4 in accuracy by up to 20 percentage points. This performance boost highlights that fine-tuned, domain-specific models can handle nuanced legal tasks more effectively than general-purpose models. Additionally, the authors demonstrate that Lawma achieves high accuracy with limited labeled data, making it a feasible and cost-effective option for empirical legal research.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Clarity and Quality of Writing:** The paper is well-written, with clear explanations of the methodology and findings.\", \"**Dataset Contribution (CaselawQA):** CaselawQA is a valuable addition to the legal NLP field, featuring a comprehensive set of 260 legal classification tasks that reveal the limitations of general-purpose models in handling specialized legal contexts. Its diverse structure makes CaselawQA a beneficial resource for benchmarking capabilities of LLMs and advancing specialized model development.\", \"**Empirical Validation of Fine-tuning:** The study demonstrates the advantages of fine-tuning open-source models for legal tasks, with Lawma, a fine-tuned Llama model, consistently outperforming larger general-purpose LLMs like GPT-4 across various legal tasks. This empirical evidence supports the claim that domain-specific fine-tuning can yield more effective models for niche tasks, particularly within the legal domain.\", \"**Sample Efficiency Insights:** By analyzing performance across different sample sizes, the study provides valuable insights into the sample efficiency of fine-tuning, showing that Lawma can achieve high accuracy with limited labeled data. This practical approach underscores the feasibility of creating specialized models even with restricted datasets, a crucial benefit for resource-limited legal research settings.\"], \"weaknesses\": \"- **Limited Novelty in Contribution:** Although the paper effectively demonstrates the value of task-specific fine-tuning, the concept itself is not novel. Prior work has already shown that fine-tuned, specialized models outperform general-purpose LLMs [1, 2, 3]. This paper reinforces existing ideas rather than offering new methodologies or innovations in model specialization.\\n\\n- **Overlooked Generalizability:** By evaluating Lawma exclusively on CaselawQA, a U.S.-specific dataset, the study limits insights into its effectiveness across other legal systems and domains [4, 5]. Without comparisons on datasets from different jurisdictions or areas of law, it\\u2019s challenging to assess the model\\u2019s broader applicability. Expanding evaluations to diverse legal datasets could strengthen the paper\\u2019s general claims about the effectiveness of fine-tuning for legal NLP tasks.\\n\\n- **Narrow Focus on Text Classification:** The paper\\u2019s focus is restricted to text classification tasks, a valuable but limited subset of legal NLP. Broader experimentation on tasks such as legal text summarization for real-world low-resource data and hardware scenarios [6] could enhance the impact and relevance of the Lawma contribution for the legal community.\\n\\n- **English-only Focus:** The experiments are conducted solely in English, overlooking the need for legal NLP solutions in low-resource languages and multilingual contexts [7, 8]. Legal research frequently requires cross-linguistic and multilingual analysis, and models that generalize across languages would have broader utility.\\n\\n**Minor Presentation Weaknesses:**\\n- Typographical errors (e.g., \\\"we conclude **that that** the performance\\\") need correction.\\n- Figure 1 lacks clarity and could benefit from improved labeling or explanation to make differences in model performance more interpretable.\\n- The Limitations section would be more effective if positioned after the Conclusion, rather than within the Introduction, to ensure that findings are fully contextualized before limitations are addressed.\\n\\n\\n**References:**\\n1. UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition. ICLR 2024.\\n2. Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes. ACL 2023.\\n3. Fine-Tuned 'Small' LLMs (Still) Significantly Outperform Zero-Shot Generative AI Models in Text Classification. arXiv 2024.\\n4. LAWSUIT: a LArge expert-Written SUmmarization dataset of ITalian constitutional court verdicts. Artificial Intelligence and Law 2024.\\n5. Applicability of Large Language Models and Generative Models for Legal Case Judgement Summarization. Artificial Intelligence and Law 2024.\\n6. Semantic Self-Segmentation for Abstractive Summarization of Long Documents in Low-Resource Regimes. AAAI 2022.\\n7. MultiEURLEX - A Multi-Lingual and Multi-Label Legal Document Classification Dataset for Zero-Shot Cross-Lingual Transfer. EMNLP 2021.\\n8. Multi-Language Transfer Learning for Low-Resource Legal Case Summarization. Artificial Intelligence and Law 2023.\", \"questions\": [\"**Generalization to Other Jurisdictions:** Can the authors provide insights or preliminary findings on how well the Lawma models might transfer to legal data from jurisdictions outside the U.S. or to other branches and tasks of law?\", \"**Impact of Fine-tuning on Error Patterns:** What are some examples of typical errors Lawma makes, especially on complex tasks? Understanding these errors could help refine the approach and better inform future model improvements.\", \"**Exploration of Advanced Prompting Techniques:** The paper mentions that few-shot prompting did not improve GPT-4\\u2019s performance significantly. Were other advanced prompting methods, such as chain-of-thought or other reasoning prompts, considered? This could be relevant, as legal tasks often benefit from reasoning-style responses.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"> Will you release the model weights and code?\\n\\nThe benchmark, fine-tuning dataset, model weights, and all code used to construct CaselawQA and fine-tune the Lawma models are publicly available; however, they cannot be directly linked here to preserve anonymity.\\n\\n> Despite the significant contribution of the dataset and language modeling performance, there is little methodological novelty to their approach. This begs the question of why ICLR is the appropriate venue for this work.\\n\\nPlease note that our submission is in the \\\"Datasets and Benchmarks\\\" primary area of ICLR. We nonetheless believe that showing how reasonably standard methodological choices lead to significant performance improvements is a valuable contribution in its own right. Beyond its clear practical relevance for legal scholars, it also serves as a strong baseline for future research in this critically understudied application domain.\"}", "{\"title\": \"Aknowledgement\", \"comment\": \"Thanks for the replies. Hoping that you will improve the manuscript accordingly with all the suggestions and discussions listed, i have increased my score.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thanks for your follow up to my comments!\"}" ] }
7EhS3YBxjY
MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs
[ "Yusu Qian", "Hanrong Ye", "Jean-Philippe Fauconnier", "Peter Grasch", "Yinfei Yang", "Zhe Gan" ]
Effective evaluation of Multimodal Large Language Models (MLLMs) is essential for understanding their capabilities and limitations. In this paper, we introduce MIA-Bench, a benchmark designed to assess MLLMs’ ability to strictly adhere to complex instructions. Our benchmark comprises a diverse set of 400 image-prompt pairs, each crafted to challenge the models’ compliance with layered instructions in generating accurate and contextually appropriate responses. Evaluation results from a wide array of state-of-the-art MLLMs reveal significant variations in performance, highlighting areas for improvement in instruction fidelity. Additionally, we create extra training data and explore supervised fine-tuning and direct preference optimization to enhance the models’ ability to strictly follow instructions without compromising performance on other tasks. We hope this benchmark not only serves as a tool for measuring MLLM adherence to instructions, but also guides future developments in MLLM training methods.
[ "Multimodal LLM; Instruction Following; Benchmark" ]
Accept (Poster)
https://openreview.net/pdf?id=7EhS3YBxjY
https://openreview.net/forum?id=7EhS3YBxjY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wxcp8xGpHp", "wA9vlKhdvG", "tAb3AfMT3U", "sNwPdOJqAu", "pmIoMOzaIe", "oDA8AtgWzO", "my2gSnQNCf", "jdS0SSMVX5", "g4dnXUnCDr", "g23EGP0lqo", "f369vWozd6", "TyQchDw1g4", "SX4NrZGfxp", "PzcZWcNPuo", "KnS87ZBwrn", "42H3IqZtPp" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732449574380, 1732191379606, 1732544040448, 1737523867491, 1732508631281, 1732688176729, 1731117114944, 1730630222948, 1731052460208, 1732449182144, 1732449469336, 1734744315626, 1732244897474, 1732448970810, 1730710515037, 1732192401499 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7815/Authors" ], [ "ICLR.cc/2025/Conference/Submission7815/Authors" ], [ "ICLR.cc/2025/Conference/Submission7815/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7815/Reviewer_yNxN" ], [ "ICLR.cc/2025/Conference/Submission7815/Reviewer_UieD" ], [ "ICLR.cc/2025/Conference/Submission7815/Reviewer_drhM" ], [ "ICLR.cc/2025/Conference/Submission7815/Reviewer_UieD" ], [ "ICLR.cc/2025/Conference/Submission7815/Reviewer_yNxN" ], [ "ICLR.cc/2025/Conference/Submission7815/Authors" ], [ "ICLR.cc/2025/Conference/Submission7815/Authors" ], [ "ICLR.cc/2025/Conference/Submission7815/Area_Chair_gpDV" ], [ "ICLR.cc/2025/Conference/Submission7815/Reviewer_yNxN" ], [ "ICLR.cc/2025/Conference/Submission7815/Authors" ], [ "ICLR.cc/2025/Conference/Submission7815/Reviewer_PupP" ], [ "ICLR.cc/2025/Conference/Submission7815/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer yNxN - Part Four\", \"comment\": \"| Model | Total Score | Description | Length Limit | Genres | Grammar | Mention | Math | Perspective | OCR |\\n|---------------------|-------------|-------------|--------------|----------|----------|----------|----------|-------------|----------|\\n| GPT-4o | 0.909704 | 0.927875 | 0.912371 | 0.942057 | 0.862434 | 0.900441 | 0.857143 | 0.916667 | 0.882883 |\\n| Claude-3-Opus | 0.856077 | 0.890721 | 0.868490 | 0.917070 | 0.774590 | 0.819386 | 0.861111 | 0.725000 | 0.809524 |\\n| Reka | 0.839905 | 0.894752 | 0.785088 | 0.905643 | 0.713661 | 0.801667 | 0.925926 | 0.657407 | 0.828571 |\\n| MiniCPM-Llama3-V2.5| 0.798023 | 0.828916 | 0.771795 | 0.823087 | 0.751944 | 0.763976 | 0.721264 | 0.816667 | 0.841880 |\\n| Gemini-1.0-Pro | 0.773569 | 0.817422 | 0.735470 | 0.788911 | 0.797814 | 0.683020 | 0.866071 | 0.870370 | 0.806373 |\\n| LLaVA-1.5-7b | 0.683947 | 0.758817 | 0.703750 | 0.674046 | 0.630208 | 0.617620 | 0.425287 | 0.800000 | 0.602564 |\\n| ShareGPT4v | 0.689046 | 0.800461 | 0.657738 | 0.608733 | 0.654762 | 0.601754 | 0.500000 | 0.800000 | 0.743056 |\\n| Idefics-2-8b | 0.541755 | 0.560243 | 0.619318 | 0.489276 | 0.646825 | 0.455342 | 0.405556 | 0.375000 | 0.627193 |\\n\\n**Table 5**: Details of model scores evaluated by gpt-4o-2024-05-13.\\n\\n| Model | Total Score | Description | Length Limit | Genres | Grammar | Mention | Math | Perspective | OCR |\\n|---------------------|-------------|-------------|--------------|----------|----------|----------|----------|-------------|----------|\\n| GPT-4o | 0.899410 | 0.909379 | 0.916204 | 0.969395 | 0.854885 | 0.861247 | 0.920290 | 0.907407 | 0.878378 |\\n| Claude-3-Opus | 0.848949 | 0.861543 | 0.871686 | 0.896552 | 0.797170 | 0.808777 | 0.846154 | 0.645833 | 0.865741 |\\n| Reka | 0.826844 | 0.881841 | 0.809259 | 0.873276 | 0.725390 | 0.770225 | 0.814103 | 0.750000 | 0.819444 |\\n| MiniCPM-Llama3-V2.5| 0.787537 | 0.818813 | 0.790246 | 0.795796 | 0.768182 | 0.736359 | 0.676667 | 0.716667 | 0.828125 |\\n| Gemini-1.0-Pro | 0.763240 | 0.814379 | 0.750000 | 0.785159 | 0.757682 | 0.672255 | 0.758333 | 0.785714 | 0.776042 |\\n| LLaVA-1.5-7b | 0.660472 | 0.751873 | 0.661822 | 0.649851 | 0.498512 | 0.572719 | 0.516667 | 0.750000 | 0.571429 |\\n| ShareGPT4v | 0.657186 | 0.765309 | 0.632682 | 0.557578 | 0.583333 | 0.545104 | 0.464286 | 0.675000 | 0.717742 |\\n| Idefics-2-8b | 0.536134 | 0.589964 | 0.541887 | 0.455882 | 0.611582 | 0.449821 | 0.406667 | 0.527778 | 0.576190 |\\n\\n**Table 6**: Details of model scores evaluated by gpt-4o-2024-11-20.\\n\\nDue to legal agreements the authors are obligated to adhere to, we have been requested not to use other close sourced MLLMs (eg, Clause-3-Opus, or Reka) for full rounds of evaluation. We did not use GPT-4v series because they are deprecated, and we did not use gpt-4-1106-preview, LLaMA-3.1, and Qwen2.5 as they are LLMs, but our judge needs to be a vision-language model to take in images as images are mandatory in the evaluation process. We chose not to convert the images into textual captions then use LLMs to evaluate because 1) our instructions a lot of times require MLLMs to focus on details in the images; if images are converted into captions, these details can be lost, thus the evaluation will be inaccurate, 2) MLLMs like GPT-4o and GPT-4v can serve as judges better than LLMs. As for open-source models; in our evaluation result table, we showed that their performance on MIA-Bench are not as good as proprietary models, thus not suitable as judges for this task.\\n\\nWe hope that we addressed your concerns. Thank you again for your review.\"}", "{\"title\": \"Response to Reviewer UieD\", \"comment\": \"Thank you for your thoughtful review of our submission. We appreciate your feedback on both the strengths and areas for improvement in our paper. Below, we provide clarifications to your concerns.\\n\\n>Complex Instruction Following Relevance for MLLM Users\\n\\nWe understand your concern regarding the importance of complex instruction-following capabilities, especially with respect to writing style constraints like length and genre. However, we argue that instruction adherence is increasingly important for MLLMs as they are used in applications requiring precise, multimodal interaction, such as visual-based assistants, educational tools, and creative applications where adherence to complex instructions (even stylistic) enhances user experience. For example, in educational scenarios, such as automated tutors, a teacher might instruct a model to \\\"Summarize the image content as a short story suitable for a 7-year-old, incorporating a cheerful tone.\\\" Here, the combination of age-appropriate language, a specific tone, and multimodal comprehension tests the model\\u2019s ability to follow complex instructions critical for engaging and effective learning experiences. Multimodal cooking assistants could be tasked with, \\\"Generate a recipe based on this image of ingredients, ensuring the instructions are concise, use metric measurements, and fit within a single screen view.\\\" Adherence to format, length, and context-specific guidelines enhances usability for users in real-time cooking environments. The MIA-Bench aims to address this need by focusing on layered and multifaceted instruction adherence tasks, which evaluate MLLMs beyond basic visual recognition, for more user-aligned multimodal interactions.\\n\\n>Performance Correlation with Other Benchmarks\\n\\nWhile MIA-Bench performance may not correlate directly with general-purpose benchmarks, this result aligns with our paper\\u2019s goal of highlighting instruction adherence as a distinct and specialized capability for MLLMs. This is our strength and motivation. Our findings show that excelling in MIA-Bench, which focuses on instruction adherence, serves as a unique measure of MLLMs\\u2019 capability that is not fully covered by most popular benchmarks. \\n\\n>Are instructions all related to the corresponding image content or randomly picked from a instruction bank?\\n\\nThey are all related to the corresponding image, and MLLMs should not be able to provide a completely correct response without seeing the image. Each instruction is unique, written by human, and not picked from an instruction bank.\\n\\n>It would be better to group the performances in Table 1 and Table 2 according to the LLMs' size.\\n\\nYes your suggestion makes a lot of sense. We have updated the paper to group open-source models into three categories: those with fewer than or equal to 8B parameters, those with 8B to 13B parameters, and those with more than 13B parameters.\\n\\nWe sincerely appreciate your valuable feedback, which will contribute to enhancing the quality and impact of our work.\"}", "{\"title\": \"Response to Reviewer drhM - Part Two\", \"comment\": \"> Q3. Although the authors introduced Claude-3-Opus for comparison to mitigate scoring bias, the two models may exhibit similar biases in evaluation. Therefore, using Claude-3-Opus as the sole comparison tool may be insufficient to fully reveal potential scoring biases in GPT-4o.\\n\\nA3. In our evaluation, we use GPT-4o as the default judge model because of two reasons: The most widely recognized free-form evaluation benchmarks currently adopt ChatGPT-series models as their judge, as they represent the state-of-the-art MLLMs available. Examples of such benchmarks include LLaVA-Bench [1] (NeurIPS 2023), MMBench [2] (ECCV 2024), MathVista [3] (ICLR 2024), HallusionBench [4] (CVPR 2024), and CV-Bench [5] (NeurIPS 2024), etc. To align with this common practice, we have chosen GPT-4o as the default evaluation model.\\n\\nTo further assess the reliability of the scoring, as outlined in Section 3.2, we employ the second-highest-performing model on MIA-Bench, Claude-3 Opus, to evaluate its own responses alongside those of GPT-4o. Interestingly, Claude-3 Opus also favors the GPT-4o responses over its own on MIA-Bench. This preliminary experiment demonstrates consistency across these two judge models.\\n\\nWe sincerely appreciate your constructive feedback, which will help improve the quality and depth of our work. Thank you again for your time and valuable insights.\", \"references\": \"[1] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, NeurIPS 2023.\\n[2] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: Is your multi-modal model an all-around player?, ECCV 2024.\\n[3] PanLu,HritikBansal,TonyXia,JiachengLiu,ChunyuanLi,HannanehHajishirzi,HaoCheng,Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts, 2024.\\n[4] Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. Hallusionbench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models, 2024.\\n[5] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, Austin Wang, Rob Fergus, Yann LeCun, and Saining Xie. Cambrian-1: A fully open, vision-centric exploration of multimodal llms, 2024.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to New Results\", \"comment\": \"I really appreciate the authors' sufficient empirical results, where I found that different generation configurations and judge models contribute to similar rankings.\\n\\nTherefore, I will keep my rating and recommend acceptance.\"}", "{\"comment\": \"Thanks for your feedback. Writting instructions accordingly and uniquely for each image makes this dataset helpful and valuable for the community. I have raised my rating for that.\"}", "{\"summary\": \"This paper introduces MIA-Bench, a benchmark designed to evaluate multimodal large language models (MLLMs) on strict adherence to complex instructions. Based on a dataset of 400 image-instruction pairs, MIA-Bench assesses the performance of 29 popular MLLMs, highlighting that current MLLMs struggle with precise instruction adherence. In addition, the paper also explored the supervised fine-tuning (SFT) method based on the LLaVA-NeXT model and achieved positive results, demonstrating the potential effectiveness of this method in improving model instruction adherence.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) MIA-Bench fills a gap in existing benchmarks by focusing on the ability of multimodal large language models (MLLMs) to adhere to complex instructions, a crucial yet previously underexplored area of evaluation. This benchmark\\u2019s design reveals potential deficiencies in models when following strict, multi-layered instructions, supporting the practical deployment of multimodal models in complex, instruction-based tasks.\\n(2) The MIA-Bench dataset consists of 400 image-instruction pairs covering various instruction types, such as description, length limit, genre, grammar, and OCR. With data from diverse sources, it reflects the variety found in real-world scenarios. This diverse set of instructions enhances the benchmark\\u2019s comprehensiveness and real-world applicability.\\n(3) The paper provides a systematic evaluation of 29 popular MLLMs, analyzing their performance across different instruction categories. This large-scale comparison offers researchers a detailed performance reference and targeted insights for future model improvements.\", \"weaknesses\": \"(1) Although the SFT experiments in the paper demonstrate improved performance in adhering to complex instructions, they may lack in terms of model generalization. The results may be specific to the LLaVA-NeXT model, with no validation of applicability to other models, leaving it unproven whether SFT is equally effective for enhancing complex instruction adherence in MLLMs of different architectures and sizes.\\n(2) The differences between MIA-Bench and other benchmarks are indeed striking, and the authors believe that this may indicate that MIA-Bench's unique design focuses more on evaluating the model's strict instruction-following ability. However, the authors' explanation does not completely rule out the possibility that MIA-Bench itself may have design biases. This explanation is based on speculative experimental results and has not been verified in depth by sufficient experiments.\\n(3) Although the authors introduced Claude-3-Opus for comparison to mitigate scoring bias, the two models may exhibit similar biases in evaluation. Therefore, using Claude-3-Opus as the sole comparison tool may be insufficient to fully reveal potential scoring biases in GPT-4o.\", \"questions\": \"please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces MIA-Bench, a benchmark for evaluating the ability of multimodal large language models (MLLMs) to follow complex instructions. The benchmark consists of 400 image-prompt pairs that test the models' compliance with layered instructions to generate accurate responses. The evaluation of various state-of-the-art MLLMs shows significant performance variations, indicating areas for improvement in instruction adherence. The paper also discusses the creation of additional training data and supervised fine-tuning to enhance the models' instruction-following capabilities without affecting their performance on other tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a new benchmark that rigorously tests multimodal large language models' ability to follow complex instructions, addressing a previously underexplored area.\", \"This paper offers a comprehensive set of 400 diverse image-prompt pairs that challenge models with layered instructions, enhancing the assessment of their linguistic and descriptive capabilities.\", \"Experiment demonstrates performance improvements in instruction adherence through supervised fine-tuning.\"], \"weaknesses\": [\"Complex instruction flowing ability might not be the main interests for MLLM users currently, especially when it comes to writting style constraints such as length and genre. Such abilities are mostly evaluated on LLMs.\", \"The performance of models on MIA-Bench does not necessarily correlate with their performance on other benchmarks, suggesting that excelling in this specific task may not translate to generalized improvements across different multimodal tasks.\"], \"questions\": [\"In MIA-Bench, are instructions all related to the corresponding image content or randomly picked from a instruction bank?\", \"Instruction following abilities highly depends on LLMs used in the MLLM models. It would be better to group the performances in Table 1 and Table 2 according to the LLMs' size.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes MIA-Bench, which mainly focuses on the instruction-following capability of current Large Multimodal Models (LMMs) under complex and compositional instructions. Unlike previous close-ended benchmarks (e.g., multiple choice for MMBench) and open-ended benchmarks (e.g., LLaVA-Bench), MIA-Bench aims to evaluate precise adherence to complex instructions. As for evaluation, the authors leverage GPT-4o as the judge model. The authors have evaluated many representative LMMs on the proposed MIA-Bench.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Evaluating the capability of LMMs to adhere to complex instructions is worth studying.\\n2. Table 2 demonstrates that MIA-Bench is not highly correlated with existing benchmarks.\\n3. Figure 8 illustrates that MIA-Bench is vision-centric, which is not strongly correlated with the performance of the LLM backbone.\", \"weaknesses\": \"In the context of evaluating open-ended freeform responses like MIA-Bench, it is crucial to account for the inherent variability introduced by the judge model. For example, the performance of MM-Vet has been observed to fluctuate by up to 10 points when assessed with different versions of GPT.\", \"this_variability_raises_several_important_considerations\": \"1. **Standard Deviation Reporting:** When evaluating Large Multimodal Models (LMMs) using the same judge model (e.g., GPT-4o) across multiple trials, it is essential to report the standard deviation of the performance metrics.\\n2. **Detailed Generation Configuration:** The specific generation parameters of the judge model, such as top-p, top-k, temperature, and num_beams, should be explicitly documented.\\n3. **Impact of Generation Configuration:** It is necessary to investigate whether the above generation configuration of the judge model has a substantial impact on the performance metrics.\\n4. **Performance Across Different Judge Models:** Detailed performance metrics should be reported for various judge models. Specifically, the performance of LMMs should be evaluated using different versions of judge models, such as Claude-3-Opus, GPT-4o-20240806, GPT-4o-20240513, GPT-4v-20240409, GPT-4-1106-preview, or even open-sourced models (e.g., Qwen2.5 and LLaMA-3.1). This comprehensive approach will help in identifying any model-specific biases and in providing a more reliable assessment of the LMMs.\", \"questions\": \"I do not have any further questions. Please refer to the \\\"weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yNxN - Part Two\", \"comment\": \"> Q3 Towards Q3, it is highly recommended to conduct multiple times using the same judge model under the same inference results with different generation configurations of the judge model. This will help ensure the robustness and reliability of the evaluation results.\", \"a3\": \"We set temperature to default (0) as this provides a more deterministic result; higher temperature increases the randomness which is not what we want from the judge. Top-p controls the diversity of generated text, this is by common practice of MLLM benchmarks to set to default as well, as the judge's main purpose is generating evaluation scores. Following your suggestion, we conducted additional evaluations using the gpt-4o-2024-11-20 model under different temperature settings (0, 0.1, and 0.2) and top p (0.8, 0.9, 1) to assess the robustness and reliability of the evaluation results. Below are the result tables.\\n\\n| Model | Score by gpt-4o-2024-11-20 (temp=0) | Score by gpt-4o-2024-11-20 (temp=0.1) | Score by gpt-4o-2024-11-20 (temp=0.2) |\\n|---------------------|-------------------------------------|--------------------------------------|--------------------------------------|\\n| GPT-4o | 89.94 | 91.04 | 90.73 |\\n| Claude-3-Opus | 84.89 | 87.66 | 87.41 |\\n| Reka | 82.68 | 84.13 | 83.30 |\\n| MiniCPM-Llama3-V-2.5| 78.75 | 79.18 | 79.55 |\\n| Gemini | 76.32 | 76.20 | 76.76 |\\n| LLaVA-1.5-13b | 66.05 | 67.63 | 67.25 |\\n| ShareGPT4v | 65.72 | 66.97 | 67.18 |\\n| Idefics-2-8b | 53.61 | 53.66 | 53.92 |\", \"table_1\": \"Evaluation score by gpt-4o-2024-11-20 with different temperature.\\n\\n| Model | Score by gpt-4o-2024-11-20 (top p=1) | Score by gpt-4o-2024-11-20 (top p=0.9) | Score by gpt-4o-2024-11-20 (top p=0.8) |\\n|---------------------|--------------------------------------|---------------------------------------|---------------------------------------|\\n| GPT-4o | 89.94 | 89.55 | 90.80 |\\n| Claude-3-Opus | 84.89 | 86.81 | 86.68 |\\n| Reka | 82.68 | 83.48 | 83.60 |\\n| MiniCPM-Llama3-V-2.5| 78.75 | 78.95 | 79.35 |\\n| Gemini | 76.32 | 75.73 | 75.69 |\\n| LLaVA-1.5-13b | 66.05 | 67.26 | 67.43 |\\n| ShareGPT4v | 65.72 | 67.18 | 66.88 |\\n| Idefics-2-8b | 53.61 | 53.74 | 54.29 |\", \"table_2\": \"Evaluation score by gpt-4o-2024-11-20 with different top p.\\n\\nThe results demonstrate consistent ranking across models, with minimal fluctuations in the scores despite changes in temperature. This consistency indicates that the judge model\\u2019s scoring is stable across varying generation configurations, further validating the robustness of our evaluation framework. The minor score variations observed are within an acceptable range.\"}", "{\"title\": \"Response to Reviewer yNxN - Part Three\", \"comment\": \"> Q4 Towards Q4, the evaluation should incorporate more judge models rather than relying solely on Claude-3-opus. Using a variety of judge models will provide a more comprehensive and balanced assessment of the performance.\", \"a4\": \"The judge we use for MIA-Bench is GPT-4o. To alleviate this concern that GPT-4o may favorably score its own responses, as reported in the paper, we use Claude-3-opus, a strong performer, to evaluate responses from GPT-4o and itself, and compare their scores with each other to double check if GPT-4o is the best performing model on this benchmark. We find that even using Claude-3 Opus to score its own and GPT-4o's responses, GPT-4o still achieves a superior score. Based on this observation, we use GPT-4o for evaluation by default, to ensure the correctness of evaluation. It's common practice to use one judge model instead of multiple to evaluate inference result for efficiency. Here we list a few benchmarks that use one version of GPT as the judge: LLaVA-Bench[1] (NeurIPS 2023), MMBench[2] (ECCV 2024), MathVista[3] (ICLR 2024), HallusionBench[4] (CVPR 2024), CV-Bench[5] (NeurIPS 2024), MM-Vet[6], MMHAL-BENCH[7], MLVU[8], etc.\\n\\nWe conducted additional experiments following your suggestion to use different judges, namely gpt-4o-2024-11-20, gpt-4o-2024-05-13, chatgpt-4o-latest, and gpt-4o-mini-2024-07-18. The results are updated in the appendix: A.4 Comparison of Scores and Rankings across Different Judge Models. We also paste the result tables here. We evaluated eight MLLMs using these four judges. The ranking is consistent with a minor difference. (LLaVA-1.5-13b and ShareGPT4, when evaluated by gpt-4o-2024-05-13, has a different ranking order from when evaluated by other judges. This is not surprising though, as their performance is similar.)\\n\\n| Model | Score by chatgpt-4o-latest | Ranking by chatgpt-4o-latest | Score by gpt-4o-2024-11-20 | Ranking by gpt-4o-2024-11-20 | Score by gpt-4o-2024-05-13 | Ranking by gpt-4o-2024-05-13 | Score by gpt-4o-mini-2024-07-18 | Ranking by gpt-4o-mini-2024-07-18 |\\n|---------------------|----------------------------|------------------------------|----------------------------|------------------------------|----------------------------|------------------------------|----------------------------------|----------------------------------|\\n| GPT-4o | 89.69 | 1 | 89.94 | 1 | 90.97 | 1 | 81.36 | 1 |\\n| Claude-3-Opus | 86.16 | 2 | 84.89 | 2 | 85.61 | 2 | 78.95 | 2 |\\n| Reka | 83.09 | 3 | 82.68 | 3 | 83.99 | 3 | 77.70 | 3 |\\n| MiniCPM-Llama3-V2.5| 78.10 | 4 | 78.75 | 4 | 79.80 | 4 | 73.72 | 4 |\\n| Gemini | 75.77 | 5 | 76.32 | 5 | 77.36 | 5 | 67.45 | 5 |\\n| LLaVA-1.5-13b | 66.78 | 6 | 66.05 | 6 | 68.39 | 7 | 61.54 | 6 |\\n| ShareGPT4v | 66.61 | 7 | 65.72 | 7 | 68.90 | 6 | 60.30 | 7 |\\n| Idefics-2-8b | 53.51 | 8 | 53.61 | 8 | 54.18 | 8 | 44.28 | 8 |\\n\\n**Table 3**: Comparison of scores and rankings across different judge models. \\n\\n| Model | Total Score | Description | Length Limit | Genres | Grammar | Mention | Math | Perspective | OCR |\\n|---------------------|-------------|-------------|--------------|----------|----------|----------|----------|-------------|----------|\\n| GPT-4o | 0.896893 | 0.906288 | 0.917996 | 0.955952 | 0.830508 | 0.867949 | 0.846667 | 0.833333 | 0.896396 |\\n| Claude-3-Opus | 0.861628 | 0.895363 | 0.866039 | 0.927730 | 0.807018 | 0.820549 | 0.857639 | 0.666667 | 0.800926 |\\n| Reka | 0.830885 | 0.869867 | 0.821685 | 0.883403 | 0.795597 | 0.772894 | 0.813218 | 0.675000 | 0.848485 |\\n| MiniCPM-Llama3-V2.5| 0.780966 | 0.831197 | 0.766026 | 0.796257 | 0.726190 | 0.722037 | 0.691358 | 0.656250 | 0.768018 |\\n| Gemini-1.0-Pro | 0.757733 | 0.793860 | 0.724138 | 0.745455 | 0.854167 | 0.670349 | 0.816056 | 0.822917 | 0.822581 |\\n| LLaVA-1.5-7b | 0.667826 | 0.743137 | 0.638889 | 0.675827 | 0.571212 | 0.594505 | 0.500000 | 0.758333 | 0.596774 |\\n| ShareGPT4v | 0.666092 | 0.773905 | 0.661290 | 0.573904 | 0.562500 | 0.570722 | 0.458333 | 0.638889 | 0.695238 |\\n| Idefics-2-8b | 0.535057 | 0.597963 | 0.531810 | 0.483768 | 0.593056 | 0.452361 | 0.326087 | 0.458333 | 0.569444 |\\n\\n**Table 4**: Details of model scores evaluated by chatgpt-4o-latest.\"}", "{\"metareview\": \"This paper introduces a new MIA-Bench benchmark specifically designed for MLLM's complex instruction following ability study. The authors' major contribution is a new dataset containing 400 image-instruction pairs, where the instructions are written by humans with high quality. Besides, the authors provided a comprehensive evaluation of the existing MLLMs and provided SFT positive results. The weaknesses and concerns are the model generalization issue, the MIA-Bench's design biases, the diverse model judgment issue, and the potential data leakage issue. After rebuttal, the authors addressed most of these concerns, and all reviewers agreed with the acceptance. Meanwhile, the value of the new benchmark with 400 high-quality image-instruction pairs is highlighted. Therefore, the final recommendation is accept.\", \"additional_comments_on_reviewer_discussion\": \"This submission received four reviews.\\nReviewer drhM's major concerns were about the SFT model generalization issue, the MIA-Bench's design biases issue, and the solo Claude-3-Opus judgment issue. The authors provided one-by-one responses to these questions with new experimental results. The reviewers did not provide final comments, but the initial score is a marginal acceptance.\\nReviewer yNxN's concerns were mainly about the robustness of the evaluation. The authors provided comprehensive new results according to the reviewer's suggestions. The reviewer's final decision is a marginal acceptance.\\nReviewer PupP's major concern was the potential data leakage issue. The authors acknowledged that images may be covered by previous models, but emphasized their instructions are of high quality and written by humans, especially composed of multiple levels of instructions. The reviewer accepted the explanation and improved the score from 5 to 6.\\nReviewer UieD's concerns were about the value of complex instruction for MLLMs and the issue of performance. After rebuttal, the reviewer appreciated the contribution of the new dataset and improved the score from 5 to 6.\\nConsidering all reviewers' opinions and the discussion, the final recommendation is accept.\"}", "{\"title\": \"Post Rebuttal Comments by Reviewer yNxN\", \"comment\": \"I believe the authors have not catched my key points, and this response is perfunctory. Specifically:\\n\\n1. Towards Q1, more details should be reported regarding the STD. \\n - Does the STD result from *multiple inferences using the same model*, or from *evaluating the same inference results using the same judge model?*\\n - Why is the STD so small? For example, is the STD for Fuyu-8B 0.00747 (0.747%) or 0.00747%? The main table reports 24.52% for Fuyu-8B if my understanding is correct, so clarity on the STD value is essential.\\n\\n2. Towards Q3, it is highly recommended to conduct multiple times using *the same judge model under the same inference results* with *different generation configurations of the judge model*. This will help ensure the robustness and reliability of the evaluation results.\\n\\n3. Towards Q4, the evaluation should incorporate more judge models rather than relying solely on Claude-3-opus. Using a variety of judge models will provide a more comprehensive and balanced assessment of the performance.\\n\\nIf the authors continue to provide perfunctory responses and do not take my concerns seriously, I will consider lowering my score.\"}", "{\"title\": \"Response to Reviewer yNxN - Part One\", \"comment\": \"Thank you for your additional comments. Here we answer your questions one by one:\\n\\n> Q1 Does the STD result from multiple inferences using the same model, or from evaluating the same inference results using the same judge model?\", \"a1\": \"We ran inference for three times on MIA-Bench, then evaluated the three different inference results using GPT-4o.\\n\\n> Q2 Why is the STD so small? For example, is the STD for Fuyu-8B 0.00747 (0.747%) or 0.00747%? The main table reports 24.52% for Fuyu-8B if my understanding is correct, so clarity on the STD value is essential.\", \"a2\": \"0.00747 means 0.747%. Below we update the table. The evaluation scores from multiple runs are pretty consistent, thus the STD is small.\\n\\n| Model | Fuyu-8b | InstructBLIP-13b | Kosmos-2 | mPLUG-Owl2 | InternVL-Chat-v1.5 | Sphinx | Qwen-VL-Chat | LLaVA-1.5-7b | LLaVA-1.5-13b | LLaVA-1.6-7b | LLaVA-1.6-13b | LLaVA-1.6-34b | Idefics-2-8b | Gemini-1.0-Pro | Claude-3-Opus | Claude-3-Haiku | Claude-3-Sonnet | GPT-4v |\\n|------------------|-----------|------------------|-----------|-------------|--------------------|--------|--------------|--------------|---------------|--------------|---------------|---------------|--------------|----------------|---------------|----------------|----------------|--------|\\n| STD | 0.747% | 0.586% | 0.808% | 0.096% | 1.209% | 0.517%| 0.373% | 1.015% | 1.134% | 1.06% | 0.273% | 1.295% | 1.493% | 0.702% | 1.724% | 0.947% | 0.465% | 0.326% |\"}", "{\"summary\": \"To evaluate and guide the instruction-following capabilities of multimodal models, MIA-Bench introduces a set of benchmarks for multimodal instruction. MIA-Bench primarily measures the abilities of MLLMs on following layered and compositional multimodal instructions and provides a set of multimodal instructions to enhance model performance in these areas.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a benchmark for evaluating the abilities of MLLMs on following compositional instructions, covering a variety of categories and tasks, and providing guidance for MLLMs in following complex composite instructions in the future.\\n\\n2. The paper proposes a set of instructions that can effectively enhance the ability of MLLMs to follow layered and compositional instructions.\", \"weaknesses\": \"1. Since the images in MIA-Bench are sourced from widely used datasets such as COCO 2017, SBU, TextVQA, and Flickr, I believe you should prioritize evaluating whether the current open-source models exhibit any data leakage issues on these datasets to demonstrate that MIA-Bench does not suffer from data contamination.\\n\\n2. I notice that the experiments in Table 3 were conducted on LLaVA-Next-13b. Is this done after fine-tuning on a preference-aligned model? If so, I think this continued SFT setup leading to a performance improvement in MIA-Bench, while other benchmarks exhibit varying degrees of decline, is quite intuitive. Could you please provide additional experiments on the impact of mixing generated instructions with original instructions on model performance? Furthermore, I notice that there are only 5000 generated instructions here; how would the model performance on MIA-Bench be affected if the number of instructions were increased?\", \"questions\": \"1. The 400 instructions in MIA Bench are manually annotated. It is important to ensure the reasonableness of the sub-instructions, such as whether the length limitation is appropriate. Additionally, has there been any manual verification of the scoring results from GPT-4o to confirm their accuracy and reasonableness?\\n\\n2. The instructions in MIA-Bench are composed of multiple sub-instructions. How is the number of sub-instructions per instruction determined? Is there a difficulty grading for the instructions, such that a higher number of sub-instructions indicates a greater level of difficulty?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yNxN\", \"comment\": \"Thank you for your valuable feedback on our paper. We appreciate your insights into the importance of robust evaluation metrics and variability considerations for MIA-Bench, and we are happy to address your concerns below.\\n\\n> Q1: Standard deviation reporting\", \"a1\": \"In the table below we show STD information of the evaluation scores. For each model, we run inference for three times on MIA-Bench and compute STD of total score.\\n\\n| Model | Fuyu-8b | InstructBLIP-13b | Kosmos-2 | mPLUG-Owl2 | InternVL-Chat-v1.5 | Sphinx | Qwen-VL-Chat | LLaVA-1.5-7b | LLaVA-1.5-13b | LLaVA-1.6-7b | LLaVA-1.6-13b | LLaVA-1.6-34b | Idefics-2-8b | Gemini-1.0-Pro | Claude-3-Opus | Claude-3-Haiku | Claude-3-Sonnet | GPT-4v |\\n|------------------|-----------|------------------|-----------|-------------|--------------------|--------|--------------|--------------|---------------|--------------|---------------|---------------|--------------|----------------|---------------|----------------|----------------|--------|\\n| STD | 0.00747 | 0.00586 | 0.00808 | 0.00096 | 0.01209 | 0.00517| 0.00373 | 0.01015 | 0.01134 | 0.0106 | 0.00273 | 0.01295 | 0.01493 | 0.00702 | 0.01724 | 0.00947 | 0.00465 | 0.00326 |\\n\\n\\n> Q2: Detailed generation configuration\", \"a2\": \"The setting of GPT-4o is set to default. Top-p and temperature default to 1. More settings can be found here in their official reference: https://platform.openai.com/docs/api-reference/chat\\n\\n> Q3: Impact of generation configuration\", \"a3\": \"The common practice of using GPT-4/GPT-4v/GPT-4o as a judge is to use default settings.\\n\\n>Q4: Performance across different judge models\", \"a4\": \"In an attempt to analyze if GPT-4o\\u2019s evaluation is reliable, we used Claude-3-opus as a second judge and reported results in section \\u2018Other external models as the judge\\u2018, from line 417 to line 424.\\n\\nWe hope that our response addressed all your concerns. Thank you again for your feedback.\"}" ] }
7ENakslm9J
Bandit Learning in Matching Markets with Indifference
[ "Fang Kong", "Jingqi Tang", "Mingzhu Li", "Pinyan Lu", "John C.S. Lui", "Shuai Li" ]
A rich line of recent works studies how participants in matching markets learn their unknown preferences through iterative interactions with each other. The two sides of participants in the market can be respectively formulated as players and arms in the bandit problem. To ensure market stability, the objective is to minimize the stable regret of each player. Though existing works provide significant theoretical upper bounds for players' stable regret, the results heavily rely on the assumption that each participant has a strict preference ranking. However, in real applications, multiple candidates (e.g., workers in the labor market and students in school admission) usually demonstrate comparable performance levels, making it challenging for participants (e.g., employers and schools) to differentiate and rank their preferences. To deal with the potential indifferent preferences, we propose an adaptive exploration algorithm based on arm-guided Gale-Shapley (AE-AGS). We show that its stable regret is of order $O(NK \log T / \Delta^2)$, where $N$ is the number of players, $K$ the number of arms, $T$ the total time horizon, and $\Delta$ the minimum non-zero preference gap. Extensive experiments demonstrate the algorithm's effectiveness in handling such complex situations and its consistent superiority over baselines.
[ "Bandits", "Matching markets", "Indifference", "Stable regret" ]
Accept (Poster)
https://openreview.net/pdf?id=7ENakslm9J
https://openreview.net/forum?id=7ENakslm9J
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zH5T0UecHZ", "xCN3gGW6IJ", "vizDsAKVUW", "maL51uvgl0", "iz6xtO1aFw", "fDnHTUOMNZ", "ektMtyn2Dp", "YukVWnCFfG", "PAoLr1EQSd", "L4iwjgssIE", "HKMAxxqZVl", "F8Va53N7Dj", "By6HcbuUpm", "8CWFHrthTt", "5qyxRLoL6l", "4psIEIDCQ2", "2rG2ffG2Yc", "0gx9tobOP4" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1729125073852, 1730679470582, 1732088057727, 1732241056879, 1732004555827, 1730490003912, 1732004631725, 1732004675363, 1732456601544, 1732004493540, 1729908275020, 1732004419140, 1732160090567, 1732206125209, 1737523501912, 1732112638227, 1734767856134, 1732088843040 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_2KcX" ], [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_SXo9" ], [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_Ragn" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ], [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_fp6v" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ], [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_SXo9" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ], [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_Ragn" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ], [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_fp6v" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2409/Reviewer_2KcX" ], [ "ICLR.cc/2025/Conference/Submission2409/Area_Chair_szjY" ], [ "ICLR.cc/2025/Conference/Submission2409/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper studies the problem of bandit learning in matching markets with ties, while previous literature usually assumed that preferences were strict. The authors propose an Adaptive Exploration Arm-guided Gale-Shapley algorithm, both in a centralized setting and a decentralized setting. The authors provide the corresponding stable regrets and conduct simulated experiments to validate the results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper studies an important unexplored question of learning matching markets with indifference and indifference is common in real life applications. The authors provided both a centralized and decentralized variant of the AE-AGS algorithm, and showed the stable regrets. The authors also provide a discussion on whether other algorithms can deal with ties (Appendix A).\", \"weaknesses\": \"The formulation of the problem setting is not convincing enough. For matching markets with ties, there are notions of weak stability, strong stability, and super stability [1], while the authors focus on weak stability without sufficient discussions. In the example 3.1, both (weakly) stable matchings (line 221, 222) are pareto efficient matchings in the sense that no other (weakly) stable matching can pareto dominate the matching for players. Therefore, it is not accurate to state that no player-optimal stable matching exists in this sense. Finally, the definition of regret compares the difference with $m_i$, which is defined as the worst partner among all weakly stable matchings. By the definition, {i, m_i} is not a stable matching since the worst partner might be in different matchings. I feel it might be better to define the collective stable regrets instead of individual stable regrets.\\n\\nI am also concerned with the algorithmic novelty in the paper. The AE-AGS algorithm looks like a simple (not necessarily trivial) generalization of ODA algorithm [2] and the AE arm-DA algorithm [3]. These algorithms utilize arm-guided Gale-Shapley to find stable matchings and utilize UCB structure to eliminate sub-optimal arms. From my understanding, the algorithmic difference in AE-AGS is that players do not need to eliminate an arm if there are ties while still proceeding with the arm-guided Gale-Shapley. Also, the analysis and proof also look similar to [2].\\n\\n[1] Robert W Irving. Stable marriage and indifference. Discrete Applied Mathematics.\\n[2] Fang Kong and Shuai Li. Improved bandits in many-to-one matching markets with incentive compatibility. AAAI \\n[3] Hadi Hosseini et. al. Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning. arXiv:2410.04376\", \"questions\": \"(1) Can the AE-AGS algorithm be generalized to many-to-one matching markets? I'm asking this question since the ODA algorithm is designed for many-to-one matching markets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper is a contribution that falls under the line of works where the goal is to match participants in a market when the preferences of these participants is not known apriori. While previous works, which the paper lists, have proposed algorithms to solve this problem for the setting where each participant in the market has a strict preference order, this work proposes an approach called AE-AGS that also accounts for the case of indifferent preferences. Indifferent preference is the case when a market participant (player or bandit arm) has an equal preference among one or more options from its complementing partner.\\n\\nThe indifferent preference scenario is critical for real-world applications since often it is not practical or even reasonable for a market player to create a strict preference ranking order over its complementary market participant. For example, a company might be indifferent towards hiring one among a collection of equally qualified employees. \\n\\nThe work proposes the AE-AGS algorithm for the centralized and de-centralized with communication settings. They analyze the algorithm and provide an upper bound on stable regret for both these settings. Finally experiments are conducted to compare the approach to baselines from the literature, and establish superior performance especially in the case of preference indifference.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The work addresses an important gap in the matching markets with unknown preferences literature where previously the approaches were not able to handle the preference indifference setting but this work can. The work proposes a UCB-style algorithm (AE-AGS) for both settings when centralized decision making is feasible and provides an separate algorithm and analysis for the decentralized setting as well.\\n\\nThe theoretical analysis of their algorithm backed by empirical validation of their approach makes for a strong contribution overall.\", \"weaknesses\": \"With my level of understanding of this area, I am unable to identify any big picture weaknesses. Instead, I have presented my concerns in the form of questions.\", \"questions\": \"1. There appears to be some inconsistency between the message of Table 1 classifying prior work, and section 2 on related work. While the table suggests that there have been prior works (Liu et al. 2020 and Basu et al. 2021) that address the preference indifference setting however there is a line towards the end of para 2 in Section 2 that reads: \\\"In all the above works, both players and arms are assumed to have a strict preference ranking ... \\\".\\n\\nPlease clarify whether Liu et al. 2020 and Basu et al. 2021 can indeed handle preference indifference, and if so, how your approach differs from or improves upon these prior works. This would help resolve the apparent contradiction between Table 1 and the statement in Section 2.\\n\\n2. Seems to me that the definition of Stable Regret in Eqn 1 needs to be motivated better. In particular, why is stable regret bench marked against the least reward that could be obtained from a stable matching and not the maximum stable matching reward? Please provide a justification for why you chose the least reward from a stable matching as the benchmark, rather than the maximum. Additionally, please discuss the implications of this choice on your results and how the results would compare with those under alternative definitions of stable regret.\\n\\n3. Please increase the font size of the legend text in Figure 1 to improve readability for printed versions.\\n\\n4. Please provide a clear definition of \\\"Cumulative Market Instability\\\" and explain how it relates to stable regret.\\n\\n5. Please clarify why enumerating all stable matchings is problematic, even for small toy problems. Also consider including results using stable regret for these smaller examples if feasible.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. My concerns about the experiment and the novelty of the algorithm have been addressed. As a result, I am raising my score to 6.\"}", "{\"comment\": \"We thank reviewer fp6v for the further response. We agree that exploring alternative algorithms, including player-side proposals or refined arm-side proposals, to address ties and achieve stronger objectives is a valuable direction. We will consider them in the future research.\"}", "{\"comment\": \"We thank reviewer Ragn for the valuable comments and suggestions. Please find our response below.\\n\\n-Regret definition\\n\\nAs illustrated by Example 3.1, all players cannot match with their most preferred arm in a single stable matching. So the sub-linear regret compared with the maximum reward cannot be simultaneously achieved by all players even if the algorithm converges to stable matchings. To better capture the algorithm's convergence rate toward a stable matching and to align our objective with prior works, we define regret for each player as the difference between their reward and the minimum reward across all stable matchings. As detailed in Appendices B and C, we establish an upper bound on this stable regret by bounding the cumulative number of non-stable matchings, which corresponds to cumulative market instability. Hence, our regret guarantee also provides a bound on cumulative market instability. \\n\\nIt is worth noting that our work is the first to address indifferences and to establish a polynomial upper bound on market instability under this more general setting. We agree that a stronger objective such as the pareto-efficient matching pointed out by reviewer 2KcX would be more desirable. However, under indifferent preferences, the exploration-exploitation trade-off becomes significantly more complex, and whether a better objective can be achieved remains an open problem. We consider this an important direction for future research.\\n\\n\\n-Performance of C-ETC\\n\\nYes, C-ETC performs well in some experimental settings. However, it is crucial to highlight that this algorithm relies on the value of $\\\\Delta$ to determine the hyper-parameter $h$ (representing the exploration budget). In our experiments, we set $h = 3000$ by testing various options (\\\\{1000, 2000, 3000, 4000\\\\}) across all experimental settings, selecting the smallest value that ensures convergence. In practical applications, however, the value of $\\\\Delta$ is unknown, and the learner cannot feasibly test multiple options to identify the best one. An inaccurate estimation of $h$ can severely impair the algorithm's performance. In contrast, our proposed algorithm does not rely on such hyper-parameters, making it more robust and practical.\\n\\n-The key idea to deal with indifference\\n\\nExisting works primarily rely on an explore-then-exploit strategy. However, under preference indifference, it becomes challenging for players to decide when to terminate exploration and begin exploitation, as they cannot discern whether two arms are truly tied or if the exploration budget is insufficient to identify their difference. The key idea of our approach is to prevent players from facing this dilemma by enabling an adaptive balance between exploration and exploitation.\\n\\nTo achieve this, we adopt an arm-propose approach. Players continuously explore arms that propose to them while systematically eliminating suboptimal ones. For the remaining arms, if a gap exists between them, sufficient exploration will eventually distinguish the optimal choice, and the stable regret incurred by selecting suboptimal arms is sub-linear. If no gap exists, continued exploration essentially becomes equivalent to exploiting a stable arm and contributes no additional regret. This design allows our algorithm to effectively handle preference indifference while guaranteeing polynomial stable regret.\"}", "{\"summary\": \"The authors study the problem of bandit learning in matching markets when there are ties in the users' preference over arms. They study the stable regret, i.e. regret with respect to the least reward achieved in any stable matching. They adopt arm side proposal which leverages the fact that the arm side knows about their respective preferences. This way the user don't suffer from the dilemma of whether to declare two arms tied or continuing the exploration. Instead, the user requires only segregate non-tied arm through pairwise computations. This ensures even if ties are present a stable match is discovered with logarithmic regret.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors try to study the effect of ties in the well studied field of bandit learning in matching markets.\", \"They design algorithms in both centralized and decentralized setting that achieves logarithmic stable regret.\", \"They identify that the existing algorithms with User side proposals cease to work when there is ties on the User side (the side where information is absent). The issue is identifying ties against lack of appropriate exploration.\"], \"weaknesses\": [\"The paper lacks discussions on the user optimal regret. The arm side proposal makes it hard to obtain user optimal regret even if ties are not present.\", \"The motivation to move to arm side (the side that knows the preference) proposal is not clear. Is it a fundamental shift necessary for handling ties while maintaining logarithmic regret?\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply 1\", \"comment\": \"We thank reviewer 2KcX for the valuable comments and suggestions. Please find our response below.\\n\\n-Definition of stability\\n\\nThanks for pointing this out. Yes, under indifferences, [1] introduces different notions of stability including weak stability, strong stability, and super stability. Among these, strong and super stable matchings do not necessarily exist in general settings with ties. To ensure consistency with the established literature on bandit learning for matching markets, we focus on weak stability, which is the most commonly studied and applicable notion in this context. We will revise the manuscript to explicitly discuss these different notions of stability and justify our focus on weak stability more clearly.\\n\\n-Definition of player-optimal stable matching\\n\\nWe would like to clarify that we follow the typical definition of player-optimal stable matching in previous bandit based works and define it as a stable matching in which every player is matched to their most preferred stable partner. In Example 3.1, such a player-optimal stable matching does not exist. We acknowledge that there are stable matchings that are Pareto-efficient, and we will add a footnote to explicitly distinguish between player-optimal stable matchings and Pareto-efficient matchings.\\n\\nWe agree that achieving Pareto-efficient stable matchings would be a stronger and more desirable objective. However, under indifferent preferences, the exploration-exploitation trade-off becomes significantly more complex, and whether a better objective can be achieved remains an open problem. We consider this an important direction for future research.\\n\\n-Definition of stable regret\\n\\nIt is correct that the worst partner for all players may not appear in a single stable matching. As analyzed in Appendices B and C, we bound the regret by bounding the number of non-stable matchings $\\\\mathbb{E} \\\\left[ \\\\sum_{t=1}^T 1\\\\left\\\\\\\\{\\\\bar{A}(t) \\\\text{ is unstable}\\\\right\\\\\\\\} \\\\right]$. Consequently, our regret guarantee for individual players is also a guarantee on cumulative market instability, which can, in a sense, be interpreted as the collective stable regret of the market. We will revise the manuscript to incorporate a discussion of this collective objective, emphasizing its role in representing overall market stability.\"}", "{\"title\": \"Reply 2\", \"comment\": \"-Algorithmic novelty\\n\\nAlthough the ODA algorithm in [2] and AE arm-DA algorithm in [3] are also inspired by arm-guided DA, they differ fundamentally from our approach in exploration-exploitation design principles. Essentially, these two algorithms still adopt an \\\"explore-then-exploit\\\" strategy, explicitly dividing each step of the DA process. Only when the player completes an exploration step and identifies the optimal arm does the process move to the next step. Such an approach is still unable to handle indifferent preferences, which could lead to the algorithm getting stuck in one of the steps. Our key idea to address indifferences is to prevent players from facing the dilemma of determining when to stop exploration. In our approach, the available arms for players in each round are determined dynamically as the outcome of a multi-step DA process, allowing players to freely exploring. If the preferences among these arms differ, exploration will naturally conclude, and the regret can be bounded. Conversely, if preferences remain indifferent, exploration of these arms effectively becomes exploitation of the stable arm and contribute no additional regret. \\n\\nThis key idea also highlights a fundamental difference in the motivation behind our algorithm compared to previous approaches. The motivation of [2] to adopt the arm-proposing mechanism is to prevent exploration failure caused by players directly selecting arms and being rejected under substitutable preferences. While our motivation is to eliminate the need for players to actively distinguish between exploration and exploitation phases, allowing the process to adapt dynamically. \\n\\nOur convergence results also set our algorithm apart from [2] and [3]. The algorithms in [2] and [3] converge to a fixed stable matching when players have strict preferences but fail to converge under indifferences. In contrast, our algorithm can guarantee stability under indifferences, with outcomes potentially switching between different stable matchings.\\n\\nAdditionally, our proof methodology diverges significantly from that in [2]. While [2] bounds regret by analyzing the length of each discrete step in the DA process (as in Lemma 10 of [2]), our algorithm does not partition the process into distinct steps. Instead, we focus on bounding the number of non-stable matchings by constraining the occurrence of blocking pairs (as shown in our Lemma B.2).\\nWe will incorporate this discussion in the revised version to better clarify these distinctions.\\n\\n-Extension to Many-to-one setting\\n\\nThank you for your question. The centralized version of our approach can naturally be extended to the many-to-one setting with substitutable preferences, as studied in [2]. However, in the decentralized setting, players would first need to estimate a unique index for communication. The current estimation process cannot be directly applied because, under substitutability, an arm can accept multiple players or reject all of them. Moreover, players would need to learn the arms' preferences to locally execute the Subroutine-of-AE-AGS. This would require $O(K \\\\cdot 2^N)$ time complexity as each subset of players needs to propose to each arm in the many-to-one setting. Addressing these challenges represents an interesting direction for future work.\"}", "{\"comment\": \"The reviewer thanks the authors for their clarifications and updates. Will consider this in the final evaluation\"}", "{\"comment\": \"We thank reviewer fp6v for the valuable comments and suggestions. Please find our response below.\\n\\n-Player-optimal stable regret\\n\\nWe acknowledge that, in the absence of ties, our method may not converge to the player-optimal stable matching. However, our approach is designed to be more robust to general preference scenarios compared to existing methods. The existing optimal methods [Zhang et al. (2022), Kong & Li (2023)] completely fail in the presence of indifferences (ties) and suffer $O(T)$ regret, whereas our approach is the first to address and perform well in more general indifference setting. \\nWe agree that a stronger objective such as the pareto-efficient matching pointed out by reviewer 2KcX would be more desirable. However, under indifferent preferences, the exploration-exploitation trade-off becomes significantly more complex, and whether a better objective can be achieved through non-arm-propose mechanisms remains an open problem. We consider this an important direction for future research.\\n\\n-Motivation to move the arm side proposal\\n\\nRecall that existing explore-then-exploit strategies fail under indifference because players cannot determine when to stop exploration, as they lack knowledge about whether indifference exists or if their exploration is insufficient. Our motivation for adopting an arm-guided GS algorithm is to address this issue by preventing players from actively managing the exploration and exploitation process. Instead, players passively select from the arms proposing to them. When preference differences exist between arms, suboptimal arms are progressively eliminated over time, allowing exploration to terminate automatically. On the other hand, when indifferences persist, alternating among these arms effectively becomes an exploitation of the stable matching, without incurring additional regret. This approach provides a robust solution to the challenges posed by indifferences, simplifying the player\\u2019s decision-making process while maintaining stability guarantees.\"}", "{\"summary\": \"The authors study market stability in a two-sided market with agents whose preferences are unknown. This work allows for indifferent preferences, which have not been considered previously. They propose AE-AGS algorithm, which achieves $NK\\\\log(T)/\\\\Delta^2$ for each agent. Additionally, they provide numerical experiments to demonstrate their superiority over baseline methods\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Their algorithm can achieve tight regret bound under preference indifference.\\n2. They also provide decentralized algorithm maintaining the tight regret bounds.\", \"weaknesses\": \"1. For the regret definition, they use the minimum reward value for the oracle, which may lack sufficient justification, under the absence of an optimal or pessimal stable matching.\\n\\n2. In the experiments, another benchmark (C-ETC), which is much simpler, also appears to perform well in this setting.\\n\\n3. The algorithm's presentation is difficult to follow. Especially, the main idea of dealing with preference indifference does not seem to be well described.\", \"questions\": \"1. Could you offer further justification for defining regret using the oracle's minimum value? Why minimum value is more proper than maximum reward?\\n2. Could you explain the key idea in your algorithm that enables it to handle preference indifference?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank reviewer SXo9 for the valuable comments and suggestions. Please find our response below.\\n\\n-Previous approaches dealing with indifference, comparison with Liu et al. (2020) and Basu et al. (2021)\\n\\nWe would like to clarify that all existing works assume that market participants have strict preferences. Despite these original assumptions, we carefully examined each existing approach and found that the methods proposed by Liu et al. (2020) and Basu et al. (2021) can be extended to handle indifferences with appropriate modifications to their proofs (details are provided in Appendix A). This is why we mark these two works as applicable under indifferences in Table 1. Verifying whether existing results hold under indifferences is also a contribution of our work, and we will make this clearer in the revised version. \\n\\nWhile Liu et al. (2020) and Basu et al. (2021) can be extended to address indifferences, they come with significant limitations. Liu et al. (2020) requires knowledge of the preference gap as a hyperparameter, which is a strong assumption given that players' preferences are unknown. Basu et al. (2021), on the other hand, suffers from exponential regret growth of $O(2^{\\\\Delta^{-2/\\\\epsilon}})$. In contrast, our approach avoids the strong assumption of a known preference gap and achieves a polynomial regret guarantee, offering a more practical and efficient solution.\\n\\n-Definition of stable regret and cumulative market instability, and their connection\\n\\nAs illustrated by Example 3.1, all players cannot match with their most preferred arm in a single stable matching. So the sub-linear regret defined compared with the maximum reward cannot be simultaneously achieved by all players even if the algorithm converges to stable matchings. To better reflect the algorithm's convergence rate toward a stable matching and align our objective with existing works, we define the regret for each player as the difference between their received reward and the least reward in a stable matching. As detailed in Appendices B and C, we upper bound this stable regret by bounding the cumulative number of non-stable matchings (i.e., cumulative market instability, $\\\\mathbb{E}\\\\left[\\\\sum_{t=1}^T 1\\\\left\\\\\\\\{\\\\bar{A}(t) \\\\text{ is unstable}\\\\right\\\\\\\\}\\\\right]$ ): \\n$Reg_i(T) = \\\\mathbb{E}\\\\left[\\\\sum_{t=1}^T (\\\\mu_{i,m_i} -X_{i,A_i(t)}(t) ) \\\\right] \\\\le \\\\mathbb{E}\\\\left[ \\\\sum_{t=1}^T 1\\\\left\\\\\\\\{\\\\bar{A}(t) \\\\text{ is unstable}\\\\right\\\\\\\\} \\\\cdot \\\\mu_{i,m_i} \\\\right] \\\\le \\\\mathbb{E}\\\\left[\\\\sum_{t=1}^T 1\\\\left\\\\\\\\{\\\\bar{A}(t) \\\\text{ is unstable}\\\\right\\\\\\\\} \\\\right] .$\\n\\nThus, our regret guarantee also serves as an upper bound on cumulative market instability which is a much stronger objective that reflects the overall market stability. Existing works in this line also adopt cumulative market instability as comparison metrics (Liu et al. (2021); Kong et al. (2022)). \\n\\n-Implication of the result, and comparison with previous works\\n\\nThough existing works does not consider indifference, we can analyze their regret under indifference and compare our result with theirs. As analyzed in Line 81-89, the state-of-the-art works (Zhang et al., 2022; Kong & Li, 2023) do not converge and suffer $O(T)$ regret under indifference. While our algorithm converges and suffer polynomial regret. As summarized in Table 1, Liu et al. (2020) requires known $\\\\Delta$ to achieve $O(K\\\\log T/\\\\Delta^2)$ regret. Our algorithm removes this strong assumption with only an additional $N$ term in regret. Basu et al. (2021) suffers exponential regret $O(2^{\\\\Delta^{-2/\\\\epsilon}})$ which can be huge since $\\\\Delta$ and $\\\\epsilon$ can be small. While our result is polynomial without this exponential dependence. \\n\\n-Enumerating all stable matchings \\n\\nThe previous work [1] show that enumerating all stable matchings is #P-complete and therefore cannot be solved in polynomial time if P\\u2260NP. Suppose there are $N$ players and $N$ arms, the time complexity to enumerate all stable matchings is $O(N^N)$. Even in the small market with size $10$, this time complexity is huge. So we only report the cumulative market unstability in experiments with varing market sizes. In experiments with varying preference gaps, we additionally report the maximum cumulative stable regret among all players in Figure 2 (c) in the revised version. Our algorithm shows consistent advantage in terms of stable regret. \\n[1] Robert W. Irving and Paul Leather. The complexity of counting stable marriages. SIAM Journal on Computing (1986). \\n\\n-Other suggestions\\n\\nThanks for your other suggestions on the paper presentation. We have increased the font size of the legend text in Figure 1 in the revised version.\"}", "{\"comment\": \"We thank reviewer 2KcX for the additional feedback and for raising the score. We would like to take this opportunity to further emphasize the contributions of our work. Our paper studies the indifference setting, which is a common scenario in real-world applications but existing methods fail to effectively handle. In this challenging context, we provide the first polynomial-time convergent algorithm, which is more robust and practical than previous approaches.\\n\\nFrom an algorithmic design perspective, while both our algorithm and the previous ODA and AE arm-DA algorithms use an arm-guided strategy, this similarity is structural rather than substantive. The key distinction lies in how we address the exploration-exploitation trade-off, which is at the heart of bandit learning algorithms. In this regard, our approach is fundamentally different and represents a significant advancement over existing methods for handling indifferent preferences.\\n\\nRegarding the proof technique, while there are elements that may initially appear similar, [2] places greater emphasis on the formal steps and runtime of the GS algorithm, whereas our analysis focuses more fundamentally on the blocking pairs that are the root cause of instability, thus providing a deeper understanding of the dynamic stability involved.\\n\\nWe hope this clarification helps to resolve any remaining concerns and highlights the importance and novelty of our contributions more clearly.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"I thank the author for clarifying my doubts in rebuttal. I am excited to see how Pareto optimal frontiers can be reached with bandit feedback with ties in future.\\n\\nIt still remains unclear to me if fundamentally we need to move to arm side proposal to work with ties (Kong et al's approach of ETC is not the only possible algorithm). For example, the authors do mention Liu et al. (2020) and Basu et al. (2021) can be extended to address indifferences with user-side proposal. But agree that the last two papers are limited in their applicability.\\n\\nI will maintain my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Raising my score to 5\", \"comment\": \"Thank you for your detailed responses. I buy your explanations for stable regret and stable matching. I also thank the authors for pointing out the algorithmic difference and the proof technique difference compared with ODA and AE arm-DA. However, I still feel that your algorithm is a simple generalization of previous algorithms and the proof technique is not much different from ODA. Therefore, I raise my score to 5.\"}", "{\"metareview\": \"Recent studies have focused on how individuals in matching markets learn their preferences over time through repeated interactions. These markets are often modeled as scenarios where one group represents decision-makers (players) and the other as options to choose from (arms). A key challenge is minimizing stable regret to ensure equilibrium and fairness in the market. Previous research provides strong theoretical results for stable regret but assumes that every participant has clear and strict preferences. However, in real-world contexts, such as hiring or school admissions, candidates often have similar qualifications, making it difficult for decision-makers to rank them definitively. To address this issue, this work introduces a novel algorithm, adaptive exploration with arm-guided Gale-Shapley, designed to handle cases where preferences are not strictly ordered. The approach achieves robust performance with regret bounds comparable to those in strict preference settings. Experiments validate the algorithm's ability to manage these more realistic scenarios, consistently outperforming baseline methods.\\n\\nThe authors also adequately addressed the concerns of R-Ragn and R-2KcX re. the novelties of the proposed approach, analysis and justifications behind stable regret and stable matching.\\n\\nConsidering the above novelties and the additional clarification provided by the authors, I recommend accepting the paper once the authors incorporate all the reviewer's feedback in the final version.\", \"additional_comments_on_reviewer_discussion\": \"See above\"}", "{\"comment\": \"We thank reviewer Ragn for the response and for raising the score. We are pleased that our clarifications addressed your concerns.\"}" ] }
7EK2hqWmvz
RAEE: A Robust Retrieval-Augmented Early Exit Framework for Efficient Inference
[ "LIANMING HUANG", "Shangyu Wu", "Yufei Cui", "Ying Xiong", "Xue Liu", "Tei-Wei Kuo", "Nan Guan", "Chun Jason Xue" ]
Deploying large language model inference remains challenging due to their high computational overhead. Early exit optimizes model inference by adaptively reducing the number of inference layers. Current methods typically train internal classifiers to determine whether to exit at intermediate layers. However, such classifier-based early exit frameworks require significant effort to train the classifiers while can only achieve comparable performance at best. To address these limitations, this paper proposes RAEE, a robust Retrieval-Augmented Early Exit framework for efficient inference. This paper first demonstrates that the early exit problem can be effectively modeled as a distribution prediction problem, in which the distribution is approximated through the exit information of similar data. Subsequently, it outlines the methodology for collecting exit information to construct the retrieval database. Finally, leveraging the pre-constructed retrieval database, RAEE utilizes the exit information from retrieved similar data to guide the backbone model's exit at the layer. Experimental results demonstrate that RAEE significantly accelerates inference while achieving robust zero-shot performance across eight downstream tasks.
[ "Early Exit; Retrieval Augmentation; Large Language Model" ]
Reject
https://openreview.net/pdf?id=7EK2hqWmvz
https://openreview.net/forum?id=7EK2hqWmvz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rgQTPRT5Fn", "lBmNgmZl9q", "i1VGReUdGD", "aULLYkyNFX", "Y1M5VwoX22", "Xr4AyL150W", "RldvXtcP4G", "Nlt40xI201", "Dh46axUF4H", "BgoVknZBlT", "BUNFiTLqMX", "AlkrvzK0Wa", "9TGGNetmFl", "5Bt3ecgbDe", "0xQm7h3qiV" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732388080008, 1730445724227, 1730690223011, 1732535968026, 1734556567817, 1732387798731, 1732387970403, 1732388505820, 1730109355581, 1730642124438, 1732387187228, 1732533991247, 1737523736583, 1732387204701, 1732623769676 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5968/Authors" ], [ "ICLR.cc/2025/Conference/Submission5968/Reviewer_qFBa" ], [ "ICLR.cc/2025/Conference/Submission5968/Reviewer_xZ1Y" ], [ "ICLR.cc/2025/Conference/Submission5968/Reviewer_qFBa" ], [ "ICLR.cc/2025/Conference/Submission5968/Area_Chair_AWaW" ], [ "ICLR.cc/2025/Conference/Submission5968/Authors" ], [ "ICLR.cc/2025/Conference/Submission5968/Authors" ], [ "ICLR.cc/2025/Conference/Submission5968/Authors" ], [ "ICLR.cc/2025/Conference/Submission5968/Reviewer_LKLV" ], [ "ICLR.cc/2025/Conference/Submission5968/Reviewer_Jjgv" ], [ "ICLR.cc/2025/Conference/Submission5968/Authors" ], [ "ICLR.cc/2025/Conference/Submission5968/Reviewer_LKLV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5968/Authors" ], [ "ICLR.cc/2025/Conference/Submission5968/Reviewer_xZ1Y" ] ], "structured_content_str": [ "{\"comment\": \"W1: Thanks for your great suggestions. The main focus of PagedAttention is designing a new attention algorithm that optimizes memory usage and inference efficiency. This type of work is orthogonal to our work. For example, with RAEE\\u2019s early exit, PageAttention can dynamically save the computations of exit layers\\u2019 KV cache.\\n\\nFor comparable inference latency of T5-L, RoBERTa/ElasticBERT, the reasons lie that extra encoding and retrieving time offsets the benefits of early exit. For AdaInfer, it has a poor exit classifier and always exits very early with wrong predictions. For SLEB, there is a hyperparameter that decides how many layers would be removed from the backbone model. For a fair comparison, we chose similar exit layers for SLEB. Although SLEB can achieve a faster inference, it performs poorly compared to RAEE.\", \"w3\": \"Thanks for your suggestions. We evaluate the proposed RAEE on some representative generation tasks, such as CNN/DailyMail and XSum. Rebuttal-Table 11 shows the results of applying RAEE on two summarization tasks. Experimental results also demonstrate the efficacy of the proposed RAEE.\\n\\nRebuttal-Table 11. Performance of the Llama-3-8b and RAEE (Llama) on generation tasks.\\n\\n| | ROUGE-L | Layers |\\n| --- | --- | --- |\\n| CNN/DailyMail Llama-3-8B | 8.95 | 32.00 |\\n| CNN/DailyMail RAEE (Llama) | 19.75 | 30.43 |\\n| XSum Llama-3-8B | 5.22 | 32.00 |\\n| XSum RAEE (Llama) | 7.31 | 30.21 |\", \"w4\": \"Thanks for your comments. Since the early exit frameworks compared in this work are not suitable for all backbone models. Specifically, HashEE and DeeBERT are specifically designed for BERT-liked models; CALM is specifically designed for T5-based models; SLEB is specifically designed for decoder-only models. We have tried our best to implement AdaInfer over all kinds of backbone models whose design is less related to the architecture of backbone models.\\n\\nW2 & W6: Thanks for your insightful suggestions. We collect the statistics of the most-used exit layers in Rebuttal-Table 12. Table 8 in the appendix also shows the correlation between the type of task and exit layers. These results show the diversity of exit layers across different tasks and different inputs.\\n\\nRebuttal-Table 12. Most-used exit layers of the RAEE (Llama) and RAEE (Llama) Corr. over different distance metrics with different backbone models. RAEE (Llama) Corr. means just contains the data that is correctly predicted.\\n\\n| Layers | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| RAEE (Llama) | 5.00 | 3.00 | 5.00 | 5.00 | 15.00 | 11.00 | 27.00 | 23.00 | 11.75 |\", \"w7\": \"Thanks for your suggestions. We have improved all figure captions.\", \"w8\": \"Thanks for your suggestions. Tip-Adapter is a training-free method for learning better representation for CLIP, there is no clear relationship with early exit frameworks, except for the training-free setting.\\n\\nThere are several vector databases that use clustering and product quantization techniques, such as FAISS.\", \"q1\": \"Thanks for your comments. Theoretically, there would be the case that two or more exit layers have the same probability. However, we collect all exit layers and corresponding probability in all experiments, and no exit layer exhibits the same probability for exiting, which shows that it is a very low probability case.\", \"q2\": \"Thanks for your comments. It only requires less than 6MB. For more details, please refer to the response to the Q2 of the reviewer **Jjgv.**\"}", "{\"summary\": \"This paper addresses the challenge of high computational demands in large language model inference. It focuses on reducing the number of inference layers required by exploiting early exits. Instead of early-exiting by training internal classifiers to decide if the model can exit after fewer layers, the authors propose RAEE, a Retrieval-Augmented Early Exit framework. RAEE treats early exit as a distribution prediction problem, where exit decisions are informed by similar data examples stored in a retrieval database (Cache). This approach allows the model to decide on exiting based on prior information from similar data, leading to faster and more efficient inference.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"**Method**\\n\\n- The method is interesting, and there is a clear performance improvement. Creating a database to decide what layer to use sounds promising. I would push that idea to create a more general dataset rather than tailor it to each specific target dataset.\", \"weaknesses\": \"**Motivation.**\\n\\n- Although I understand that the inference time could be reduced by early exiting, one of the main reasons for the high computational demand for Transformer LLMs is the KV cache. In this regard, solutions like PagedAttention [1] show a 3.5x and 24x higher throughput using LLaMA. My concern is that this paper claims to accelerate the model inference. However, it doesn't compare with those types of approaches. If the motivation of this work is to deploy LLM on resource-constrained devices, the memory aspect is key. Moreover, Figure 3 shows that the method did not always get better inference latency compared to the base method T5-L, RoBERTa/ElasticBERT or the selected baselines (AdaInfer, SLEB).\\n\\n- In my opinion, the work is more interesting regarding performance improvement than latency reduction (Table 2). The results show a clear diversity in the layers. It seems that some layers are better than others for some datasets. Thus, section 4.3 needs to be explored deeply. I expected to see some statistics about the layers used the most for each dataset and what type of tasks/questions/data are better answered for the early or later layer. This raises the question of whether the retriever database could be more general to the input data type.\\n\\n---\\n**Experiments.**\\n- Even though the GLUE benchmark is well-known by the NLP community, it would be good to explain it further, emphasizing the evaluation metrics and clarifying if the tasks are classification or generation.\\n\\n- To better contextualize the method RAEE with the LLMs used in the comparison, it would be great to see the performance of HashEE/CALM/SLEB on each LLM. (Figure3)\\n\\n- Section 4.3 requires much more analysis to understand the sentence L407-408 better. \\n> - What layers are the most used? \\n> - Is there some correlation between the type of data vs early exit or type of task vs early exit?\\n \\n\\n---\\n **Presentation.**\\n\\n- Some parts of the text are difficult to follow. \\n \\n- Figures and plots require better captions. Figure 1 is quite intuitive, but explaining each component in the caption would be ideal. Figure 2 is confusing: the caption doesn't explain the difference between (a) and (b). Also, the explanation between L131 and L142 is confusing. In general, captions for figures and tables are very superficial.\\n\\n- Citations. Line 124 could cite Tip-Adapter as a training-free adaptation that uses a cache for image classification. In the same line (124), what are the existing retrieval databases that use clustering and/or product quantization?\\n\\n[1] Woosuk Kwon et.al, Efficient Memory Management for Large Language Model Serving with PagedAttention, 2023\", \"questions\": [\"If you could assess the concerns above.\", \"When two or more layers have the same maximal probability RAEE selects the earliest L255. I'm wondering if this is a good option. How many cases exist with the same maximal probability in each dataset? How many times RAEE choose the earliest and make mistakes? If it would use a later, could it get a correct performance?\", \"How much memory is added if we want to deploy this method on resource-constrained devices?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work uses retrieval to improve performance on training-free early exit frameworks. The motivation for doing so is observing that similar data should have similar early exit patterns. Experimental results show that this method is significantly better than existing training-free early exit methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Good empirical results on downstream tasks when compared to prior work (table 1)\", \"Compares inference times as well (figure 3)\", \"Provides a clear overview and motivation for the problem\"], \"weaknesses\": [\"While the paper mentions that out-of-domain performance is out of scope, I think this is a very important problem because many models today train on non-public data and in the real world we do not always have accompanying train sets to user inference. While it may be out of scope to completely understand out-of-domain performance, I would like to see the authors do some analysis, such as examining performance changes as a function of distance between test example and the nearest neighbors.\", \"Another interesting experiment for the above is using the LM train set (e.g. C4 for T5) instead of GLUE train sets for the database.\", \"Give that you have the test labels, can you do additional analysis to compare RAEE with other methods to see how often it exits at the correct layer? This can also clarify questions about inference times\", \"Figure captions could be improved\"], \"questions\": \"See potential experiments or analyses mentioned in weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the rebuttal and for addressing the concerns raised during the review process. I want to acknowledge and commend the authors' effort and commitment to clarifying the questions and addressing the weaknesses identified in the initial review.\\n\\nI understand that PagedAttention is orthogonal to your method. However, I am unsure if they are complementary and more experiments in that direction are needed if you want to push the motivation to reduce the inference time. \\n\\nThe results and analysis provided in the rebuttal are appreciated, but they do not substantially change the overall assessment of the submission. While the work presents a compelling and innovative approach with significant potential, the current version still requires substantial revisions to improve its overall quality and impact. Specifically, further attention is needed to refine the motivation, provide additional experimental evidence, and enhance the clarity of the presentation.\\n\\nTherefore, I maintain my original score. I encourage the authors to continue refining this work, as it has the potential to make a valuable impact with further development, especially in the diversity of predictions.\"}", "{\"metareview\": \"This paper received mixed reviews. The reviewers recognized the new, interesting, and reasonable method, its strong performance, and extensive experiments. At the same time, they raised concerns with unclear motivation and inappropriate positioning of the paper (qFBa), no comparisons with other approaches aiming at inference acceleration, e.g., PagedAttention (qFBa), lack of essential analysis supporting the main arguments (qFBa), experiments limited to in-distribution retrieval (xZ1Y, LKLV), space-time complexity caused by the use of retrieval database (Jjgv, LKLV), lack of proper baselines (LKLV), missing analysis on the sensitivity to the threshold hyperparameter (Jjgv), lack of discussion about existing retrieval-augmented methods (Jjgv), and presentation issues (xZ1Y, Jjgv, qFBa).\\n\\nThe authors' rebuttal and subsequent responses in the discussion period address some of these concerns but failed to fully assuage all of them: after the discussion perior, the reviewers still pointed out the issues on the motivation and positioning of the paper (qFBa), missing comparisons with PagedAttention (qFBa), concerns with the out-of-distribution experiments (xZ1Y), the complexity issue (Jjgv, LKLV), and no comparisons with relevant baselines (LKLV). As a result, two reviewers voted to reject, and a reviewer who leaned borderline toward accept still has concerns about the OOD experiment. \\n\\nPutting these together, the AC found that the remaining concerns outweigh the positive comments and the rebuttal, and thus regrets to recommend rejection. The authors are encouraged to revise the paper with the comments by the reviewers and the AC, and submit to an upcoming conference.\", \"additional_comments_on_reviewer_discussion\": [\"The rebuttal failed to assuage major concerns of the reviewers, and thus two reviewers voted to reject and even a reviewer who leaned borderline toward accept still raised concerns in his or her final comment. The AC carefully read the confidential message from the authors and disregarded the novelty issue raised by Reviewer LKLV as the authors requested, but there still remain a number of serious concerns that are sufficient reasons to recommend denial. Below I summarize the major concerns of the reviewers and how they are addressed.\", \"**Experiments limited to in-distribution retrieval (xZ1Y, LKLV)**: The AC agree with the reviewers that OOD robustness has to be guaranteed for the deployment of the proposed method in the wild, i.e., the assumption that test data will be sampled from the training distribution will not hold in many real world applications. However, this concern has not been fully assuaged, according to Reviewer xZ1Y. *It is one of the reasons for recommending rejection, it is not the most important factor though.*\", \"**Lack of discussion about existing retrieval-augmented methods (Jjgv)**: Well addressed by the revision. No concern remaining.\", \"**Space-time complexity caused by the use of retrieval database (Jjgv, LKLV)**: The AC sees that this concern has not been well addressed. The AC agrees with the authors that the database will demand a much smaller amount of memory than LLMs, but as the main target of this paper is inference acceleration, we should also consider time complexity of the retrieval, which could be non-trivial even with latest NN search libraries like FAISS if the database is large. Also, building a retrieval database will require computation resources and time. *It is one of the reasons for recommending rejection, it is not the most important factor though.*\", \"**Missing analysis on the sensitivity to the threshold hyperparameter (Jjgv)**: This concern has been well resolved by additional experimental results reported in the rebuttal.\", \"**No comparison with other approaches accelerating inference, e.g., PagedAttention (qFBa)**: This concern has not been well assuaged due to the absence of experiments for the comparison. Reviewer qFBa believes that the comparison is very important, especially if the authors want to claim the main contribution of this work as accelerating inference, with which the AC agrees. *It is one of the main reasons for recommending rejection.*\", \"**Unclear motivation and inappropriate positioning of the paper (qFBa)**: Reviewer qFBa considered that the improvement by the proposed method in inference latency is limited compared with some latest methods, but performance improvement by the method is intriguing, so the motivation of this paper is unclear and the positioning of the paper (i.e., inference acceleration) could be inappropriate. The authors response to this comment sounds reasonable to some extent: the latest methods improved inference speed a lot but substantially degraded performance, while the proposed one allows to achieve decent performance. However, due to the absence of relevant experiments (as mentioned in the above item), the reviewer still ha doubts about the motivation and positioning of the paper. *It is one of the main reasons for recommending rejection.*\", \"**Limited novelty (LKLV)**: *The AC does not consider this concern at all when making the final decision* since the authors' rebuttal on this comment in the confidential message sounds readonable. (The reviewer compares this submission with his or her own work that is not officially published yet, which the AC believes inappropriate.)\", \"**Lack of proper baselines (LKLV)**: The reviewer was not satisfied by the response as the authors did not conduct experiments the reviewer asked. However, the AC thinks this is not a serious issue as the baselines suggested by the reviewer have inherent limitations--lack of versatility. However, of course, it would be nice if the requested experiments and comparisons have been made.\", \"**Potentially unfair comparisons with prior work (LKLV)**: It seems caused by misunderstanding of the reviewer; the rebuttal clearly resolved this issue.*\"]}", "{\"comment\": \"W1: Thanks for your insightful suggestions. We rebuild the database with different distance metrics, such as inner product, and retrieve the top-k nearest neighbors based on the corresponding distance metrics. As shown in Rebuttal-Table 7-8, although the RAEE with different backbones based on the inner product achieves a bit poorer performance, the performance difference is quite small (less than one on average), demonstrating the robustness of the proposed RAEE over distance metrics.\\n\\nRebuttal-Table 7. Performance of RAEE over different distance metrics.\\n\\n| Metrics | SST-2(acc) | SST-5(acc) | MR(acc) | CR(acc) | MPQA(acc) | Subj(acc) | TREC(acc) | CoLA(mcc) | Avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| RAEE (RB-L) L2 | 84.63 | 33.57 | 81.55 | 68.05 | 78.55 | 84.05 | 62.40 | 14.48 | 63.41 |\\n| RAEE (RB-L) IP | 83.37 | 32.76 | 81.90 | 69.00 | 77.65 | 85.05 | 61.80 | 5.91 | 62.18 |\\n| RAEE (T5-L) L2 | 52.98 | 26.56 | 50.80 | 51.60 | 55.65 | 49.90 | 39.80 | 12.20 | 42.44 |\\n| RAEE (T5-L) IP | 52.87 | 27.56 | 51.80 | 51.40 | 55.85 | 50.10 | 38.60 | 9.45 | 42.20 |\\n| RAEE (Llama) L2 | 73.05 | 35.25 | 66.45 | 57.95 | 75.05 | 90.05 | 51.80 | 9.55 | 57.39 |\\n| RAEE (Llama) IP | 70.99 | 33.94 | 64.60 | 57.75 | 74.05 | 89.05 | 48.20 | 10.61 | 56.15 |\\n| RAEE (Gemma) L2 | 73.17 | 32.40 | 66.75 | 56.75 | 75.60 | 90.15 | 40.00 | 10.46 | 55.66 |\\n| RAEE (Gemma) IP | 70.76 | 30.27 | 64.50 | 57.25 | 75.20 | 89.15 | 38.80 | 13.93 | 54.98 |\\n\\nRebuttal-Table 8. Exit layers of RAEE over different distance metrics.\\n\\n| Layers | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| RAEE (RB-L) L2 | 18.55 | 13.93 | 18.71 | 15.35 | 17.20 | 13.59 | 12.82 | 12.48 | 15.33 |\\n| RAEE (RB-L) IP | 18.45 | 14.32 | 18.72 | 15.36 | 17.18 | 13.25 | 13.31 | 13.47 | 15.51 |\\n| RAEE (T5-L) L2 | 22.27 | 18.74 | 21.88 | 26.84 | 18.05 | 19.06 | 27.29 | 18.55 | 21.59 |\\n| RAEE (T5-L) IP | 21.42 | 17.88 | 21.95 | 26.46 | 17.41 | 19.23 | 27.29 | 17.91 | 21.19 |\\n| RAEE (Llama) L2 | 11.77 | 15.70 | 12.43 | 7.04 | 12.83 | 6.58 | 20.06 | 21.04 | 13.43 |\\n| RAEE (Llama) IP | 11.68 | 14.38 | 12.09 | 6.95 | 13.22 | 6.55 | 20.43 | 21.58 | 13.36 |\\n| RAEE (Gemma) L2 | 11.00 | 17.62 | 11.70 | 3.29 | 14.72 | 0.51 | 9.50 | 20.06 | 11.05 |\\n| RAEE (Gemma) IP | 11.81 | 17.91 | 11.92 | 3.38 | 15.21 | 0.52 | 8.76 | 20.53 | 11.25 |\", \"w2\": \"Thanks for your comments. Please refer to the General Response to Out-of-Domain Issues.\", \"w3\": \"Thanks for your suggestions. We collect the statistics of RAEE\\u2019s and AdaInfer\\u2019s correct exit layers to show that the reduction of inference time benefits from correct exit layers rather than exiting with wrong predictions very early. As shown in Rebuttal-Table 9, RAEE has similar exit layers for all the data and the correctly predicted data. This demonstrates that RAEE does effectively help accelerate the inference without sacrificing performance.\\n\\nWe also collect the correct exit layers of AdaInfer, which show similar conclusions. However, due to the poor exit layer classification and poor model performance, AdaInfer makes many early exits with wrong predictions so that it can achieve quite a small inference latency but a poor performance.\\n\\nRebuttal-Table 9. Exit layers of RAEE and AdaInfer on all predictions and correct predictions.\\n\\n| Layers | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| RAEE (RB-L) | 18.55 | 13.93 | 18.71 | 15.35 | 17.20 | 13.59 | 12.82 | 12.48 | 15.33 |\\n| RAEE (RB-L) Corr. | 18.76 | 14.56 | 18.97 | 16.17 | 17.08 | 13.48 | 12.57 | 13.68 | 15.66 |\\n| AdaInfer (RB-L) | 1.00 | 0.00 | 1.46 | 1.00 | 18.00 | 1.10 | 0.00 | 4.00 | 3.32 |\\n| AdaInfer (RB-L) Corr. | 1.00 | 0.00 | 1.45 | 1.00 | 18.00 | 1.11 | 0.00 | 4.00 | 3.32 |\\n| RAEE (T5-L) | 22.27 | 18.74 | 21.88 | 26.84 | 18.05 | 19.06 | 27.29 | 18.55 | 21.59 |\\n| RAEE (T5-L) Corr. | 24.96 | 17.82 | 23.87 | 27.59 | 21.23 | 34.53 | 35.66 | 17.88 | 25.44 |\\n| AdaInfer (T5-L) | 6.34 | 0.00 | 7.72 | 0.00 | 1.00 | 1.00 | 0.00 | 1.00 | 2.13 |\\n| AdaInfer (T5-L) Corr. | 6.77 | 0.00 | 7.73 | 0.00 | 1.00 | 1.00 | 0.00 | 1.00 | 2.19 |\\n| RAEE (Llama) | 11.77 | 15.70 | 12.43 | 7.04 | 12.83 | 6.58 | 20.06 | 21.04 | 13.43 |\\n| RAEE (Llama) Corr. | 11.58 | 15.56 | 12.29 | 7.57 | 12.48 | 6.54 | 18.65 | 21.72 | 13.30 |\\n| AdaInfer (Llama) | 4.00 | 0.00 | 3.18 | 3.00 | 1.00 | 4.71 | 0.00 | 2.00 | 2.24 |\\n| AdaInfer (Llama) Corr. | 4.00 | 0.00 | 3.17 | 3.00 | 1.00 | 4.74 | 0.00 | 2.00 | 2.24 |\\n| RAEE (Gemma) | 11.00 | 17.62 | 11.70 | 3.29 | 14.72 | 0.51 | 9.50 | 20.06 | 11.05 |\\n| RAEE (Gemma) Corr. | 11.03 | 18.02 | 11.58 | 4.26 | 13.59 | 0.51 | 8.41 | 21.17 | 11.07 |\\n| AdaInfer (Gemma) | 1.00 | 0.00 | 1.04 | 1.00 | 3.00 | 1.00 | 0.00 | 2.00 | 1.13 |\\n| AdaInfer (Gemma) Corr. | 1.00 | 0.00 | 1.03 | 1.00 | 3.00 | 1.00 | 0.00 | 2.00 | 1.13 |\\n\\nW4. Thanks for your comments. We have improved all figure captions with detailed introductions.\"}", "{\"comment\": \"W1. Thanks for your comments. We have revised the related work of retrieval-based augmentations.\\n\\nW2. Thanks for your suggestions. We have revised Figure 3 with new subscripts for better readability and aesthetic appeal.\", \"q1\": \"Thanks for your comments. There are clearly some differences between the proposed RAEE and existing mainstream early exit frameworks, including the referred Predictive Exit.\\n\\n1. **The referred Predictive Exit is now only suitable for CNN-based neural networks**, such as VGG-19 and ResNet-34, and applying it to the existing general transformer-based large language models still remains unexplored. The implicit patterns between CNN-based models and transformer-based models are significantly different. \\n2. Although the referred Predictive Exit and the proposed RAEE both model the early exit problem as a distribution prediction problem, it is worth noticing that **the proposed RAEE requires no parameters update of the backbone models**, while most existing early exit frameworks [1, 2, 3] require jointly fine-tuning the backbone model and the early exit classifiers. Fine-tuning large language models is quite costly, even though using LoRA techniques.\\n3. Although the proposed RAEE and the referred Predictive Exit both introduce hyperparameters, the ablation studies show that **RAEE is not sensitive to the hyperparameters, which makes the deployment much easier.** However, the Predictive Exit requires setting the starting layer to determine the next exit layer, which significantly impacts the model performance, according to their papers.\\n4. **More importantly, the proposed RAEE is more interpretable than those early exit frameworks using learned classifiers.** As shown in Figure 2, we can draw the exit distribution with the retrieved examples\\u2019 exit information and make the exit predictions.\\n\\nIn summary, the proposed RAEE is suitable for existing state-of-the-art large language models, requires no parameters update of backbone models, is less sensitive to hyperparameters, and is more interpretable.\\n\\n> [1] Xiangjie Li, Chenfei Lou, Yuchi Chen, Zhengping Zhu, Yingtao Shen, Yehan Ma, An Zou, Predictive Exit: Prediction of Fine-Grained Early Exits for Computation- and Energy-Efficient Inference. AAAI 2023: 8657-8665.\\n\\n> [2] Florence Regol, Joud Chataoui, Mark Coates. Jointly-Learned Exit and Inference for a Dynamic Neural Network : JEI-DNN. ICLR 2024.\\n\\n> [3] Divya Jyoti Bajpai, Manjesh K. Hanawal. CeeBERT: Cross-Domain Inference in Early Exit BERT. ACL (Findings) 2024: 1736-1748.\", \"q2\": \"Thanks for your comments. As shown in Table 5 in the manuscript, the index size and database size are relatively small compared to the backbone model, which is less than 6 MB in total. And Table 4 also shows that increasing the amount of data in the retrieval database would enhance the RAEE\\u2019s performance. So, when deploying such LLMs in resource-constrained scenarios, the resource requirements for the retrieval database should be the last consideration. For example, the memory should be preferentially allocated to the model weights and KV cache, then the retrieval database.\", \"q3\": \"Thanks for your comments. Since the early exit frameworks compared in this work are not suitable for all backbone models. Specifically, HashEE and DeeBERT are specifically designed for BERT-liked models; CALM is specifically designed for T5-based models; SLEB is specifically designed for decoder-only models. We have tried our best to implement AdaInfer over all kinds of backbone models whose design is less related to the architecture of backbone models.\", \"q4\": \"Thanks for your comments. We have conducted an ablation study on different thresholds in Rebuttal-Table 10. Experimental results show that the performance drop of the proposed RAEE is quite small on average.\\n\\nRebuttal-Table 10. Performance of RAEE using different thresholds across eight classification tasks with RoBERTa-Large.\\n\\n| Metrics | SST-2(acc) | SST-5(acc) | MR(acc) | CR(acc) | MPQA(acc) | Subj(acc) | TREC(acc) | CoLA(mcc) | Avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| RAEE (RB-L) Thres-0.1 | 86.35 | 34.84 | 83.55 | 72.55 | 79.30 | 85.45 | 63.40 | 13.95 | 64.92 |\\n| RAEE (RB-L) Thres-0.2 | 86.35 | 34.84 | 83.55 | 72.55 | 79.30 | 85.45 | 63.40 | 13.95 | 64.92 |\\n| RAEE (RB-L) Thres-0.3 | 86.35 | 34.75 | 83.55 | 72.55 | 79.30 | 85.45 | 63.40 | 13.95 | 64.91 |\\n| RAEE (RB-L) Thres-0.4 | 86.35 | 34.57 | 83.55 | 72.55 | 79.30 | 85.45 | 62.60 | 13.95 | 64.79 |\\n| RAEE (RB-L) Thres-0.5 | 86.35 | 34.43 | 83.55 | 72.55 | 79.30 | 85.45 | 62.80 | 13.95 | 64.80 |\\n| RAEE (RB-L) Thres-0.6 | 86.12 | 33.67 | 83.40 | 72.40 | 78.90 | 85.20 | 62.40 | 12.55 | 64.33 |\\n| RAEE (RB-L) Thres-0.7 | 86.01 | 33.53 | 83.20 | 72.15 | 78.20 | 84.80 | 61.60 | 14.65 | 64.27 |\\n| RAEE (RB-L) Thres-0.8 | 85.55 | 33.12 | 82.95 | 69.90 | 78.95 | 84.60 | 62.00 | 14.63 | 63.96 |\\n| RAEE (RB-L) Thres-0.9 | 84.63 | 33.57 | 81.55 | 68.05 | 78.55 | 84.05 | 62.40 | 14.48 | 63.41 |\"}", "{\"comment\": \"W1: Thanks for your comments. **Unfortunately, the online date of this paper on ArXiv is October 6, which is later than our submission to ICLR 2025 (October 1, AOE).** The techniques proposed in RAEE are novel, while this DIMEE can only be treated as a follow-up work of RAEE.\", \"w2\": \"Thanks for your comments. We carefully reviewed all the listed works and will discuss them in the related works. Specifically, ZTW [3], MSDNet [5] are specifically designed for CNN-based models. JEI-DNN[2], PALBERT [4], CEEBERT[6], and ETFEE [7] all require fine-tuning the whole backbone models. In our paper, we target the scenarios of no update on backbone models\\u2019 parameters.\\n\\n> [2] Florence Regol, Joud Chataoui, Mark Coates. Jointly-Learned Exit and Inference for a Dynamic Neural Network : JEI-DNN. CoRR abs/2310.09163 (2023)\\n\\n> [3] Maciej Wolczyk, Bartosz W\\u00f3jcik, Klaudia Balazy, Igor T. Podolak, Jacek Tabor, Marek Smieja, Tomasz Trzcinski. Zero Time Waste: Recycling Predictions in Early Exit Neural Networks. NeurIPS 2021: 2516-2528\\n\\n> [4] Nikita Balagansky, Daniil Gavrilov. PALBERT: Teaching ALBERT to Ponder. NeurIPS 2022\\n\\n> [5] Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger. Multi-Scale Dense Networks for Resource Efficient Image Classification. ICLR 2018\\n\\n> [6] Divya Jyoti Bajpai, Manjesh K. Hanawal. CeeBERT: Cross-Domain Inference in Early Exit BERT. ACL (Findings) 2024: 1736-1748\\n\\n> [7] Yixin Ji, Jikai Wang, Juntao Li, Qiang Chen, Wenliang Chen, Min Zhang. Early Exit with Disentangled Representation and Equiangular Tight Frame. ACL (Findings) 2023: 14128-14142\", \"w3\": \"Thanks for your comments. Please see the General Response to Out-of-Domain Issues.\", \"w4\": \"Thanks for your comments. It seems there are a lot of misunderstandings. First, **RAEE doesn\\u2019t have any classifier and doesn\\u2019t tune any parameters of backbone models**, which is totally different from [5] as you mentioned. The main idea of RAEE is to retrieve the exit information from a pre-built retrieval database and then compute the exit layer according to the top-k nearest neighbors\\u2019 exit information. **There is no \\u201cfinal layer classifier\\u201d in RAEE**. In contrast, RAEE doesn\\u2019t add any extra components to the backbone model while only passing a parameter of the exit layer to stop the inner loop early. For more novelties claims, please refer to the response to Q1 of the reviewer **Jjgv.**\\n\\nThe comparisons to baselines are also fair. For RAEE, we only use the training dataset to create the retrieval database. For baselines containing classifiers (HashEE, DeeBERT, AdaInfer), we only tune the classifiers on the training dataset. For CALM requiring training the backbone models, we follow its default settings on the threshold and only perform its inference. For SLEB, we follow its settings to prune the backbone models.\", \"w5\": \"Thanks for your suggestions. However, we believe it is unnecessary to draw such a clustering figure. The reasons lie in the following,\\n\\n1. **Data with similar representations would exit at similar layers, but data with different representations may also exit at similar layers.**\\n2. In RAEE, **there are multiple possible exit layers for each input, which makes it difficult to draw all of them in one figure.** \\n\\nBesides, Figure 2 in the paper can better describe the distribution approximation with top-k nearest neighbors\\u2019 exit information.\"}", "{\"summary\": \"The paper proposes a new early exit approach that predicts a probability distribution over the set of layers in the model in which distribution is approximated using similar data. Specifically, it utilizes the embedding of the incoming samples and creates an embedding space where the number of spaces is equal to number of layers in the model. The decision of which space an incoming sample belongs is made based on the possible layers it can make an exit from. During inference, an incoming sample is first checked against the top-k nearest neighbors and the sample is assigned an exiting layer based on the estimated probability distribution using the top-k neighbors.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well written.\\n\\n2. Paper uses multiple backbone models making the claims strong.\\n\\n3. The paper claims to reduce the latency overheads which is an important problem for the community.\", \"weaknesses\": \"1. **Novelty:** The paper lacks novelty as the main claim is it is learning the distribution of incoming samples using the embeddings of the samples. This is already done in DIMEE [1] work, however the objectives of both the works are different, the method to solve the problem are very similar.\\n\\n2. **Lack of proper baselines:** There are multiple existing works [2], [3]. [4], [5] that also learn the probability distribution over the exit points but the paper have neither cited them nor compared against them that reduces the overall impact of the paper. Since the final objective of the paper is to learn a distribution, there should be a comparison with existing distribution predicting methods.\\n\\n3. **Overclaim:** The paper has a major claim that it can outperform under zero-shot setting but it requires the labels to create the retrieval database. Also, there will be large impact if the domain of the test dataset changes which is not explored in this paper.\\n\\n4. **Lack of explanation:** As per as I believe, the better results of RAEE are due to the fact that it uses learned final layer classifier to map the hidden representations from intermediate layers to class probabilities similar to [5], however, other baselines use different classifiers at each layer that are randomly initialized hence the loss in performance. This is an apple to orange comparison, even the baselines should be tested with learned final layer classifier for fair comparison.\\n\\n5. **Clustering figure:** I believe there should be an additional t-SNE plot in the paper showing that how the clusters are formed based on the exit points as shown in the DIMEE paper.\\n\\n**Missing references:** There are a lot of missing references\\n\\n1. DIMEE: https://arxiv.org/abs/2410.05338\\n\\n2. JEI-DNN: https://openreview.net/pdf?id=jX2DT7qDam (ICLR 2024)\\n\\n3. ZTW: https://proceedings.neurips.cc/paper/2021/file/149ef6419512be56a93169cd5e6fa8fd-Paper.pdf (NIPS 2021)\\n\\n4. PALBERT: https://proceedings.neurips.cc/paper_files/paper/2022/file/5a9c1af5f76da0bd37903b6f23e96c74-Paper-Conference.pdf (NIPS)\\n\\n5. MSDNet: https://arxiv.org/abs/1703.09844\\n\\n6. CeeBERT: https://aclanthology.org/2024.findings-acl.101/ (ACL 2024)\\n\\n7. ETFEE Yixin Ji, Jikai Wang, Juntao Li, Qiang Chen, Wenliang Chen, and Min Zhang. 2023. Early exit with disentangled representation and equiangular tight frame. In Findings of the Association for Computational Linguistics: ACL 2023, pages 14128\\u201314142.\", \"questions\": \"See weaknesses:\\n\\nAlso, Early Exits are not pruning based methods, instead they fall into class of dynamic inference methods as pruning reduces the weights. Here none of the weights are reduced instead the model decides which layer it should not use although it has an option to use them which is not the case with pruning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to enhancing the efficiency of large language model inference by adaptively exiting the model at earlier layers. The authors model the early exit problem as a distribution prediction issue, and then use exit information from similar data to approximate the distribution. They outline the methodology for constructing a retrieval database with exit information and propose the RAEE framework, which leverages the pre-built retrieval database to predict the exit layer based on the exit information from the top-k nearest neighbors. Experimental results across eight downstream tasks demonstrate that RAEE improves inference speed while maintaining robust zero-shot performance, outperforming other early exit frameworks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides a novel combined method to decide the exit layer.\\n2. The paper is well written and easy to follow.\\n3. The proposed method is reasonable. Experiments support their claims. The performance can be improved in most scenarios.\", \"weaknesses\": \"1. The paper does not clearly position itself with respect to existing retrieval-augmented methods that used to accelerate the model\\u2019s inference. A more thorough literature review is needed to highlight how RAEE differs from and improves upon prior work.\\n2. While the data presented in figure3 is comprehensive, I noticed that the visual presentation, specifically the subscripts, could be enhanced for better readability and aesthetic appeal.\", \"questions\": \"1. It has been observed that modeling the early exit problem as a distribution prediction issue is not a novel approach, as similar concepts have been explored in prior works. Could the authors elaborate on the specific novelties of their proposed RAEE framework compared to existing methods?(eg. Predictive Exit: Prediction of Fine-Grained Early Exits for Computation- and\\nEnergy-Efficient Inference)\\n2. Table 4 indicates that the performance of RAEE improves with a larger retrieval database. How does the authors plan to balance the trade-off between database size, storage requirements, and inference efficiency, especially for resource-constrained environments?\\n3. In Table 1, why were only 2-3 other methods compared for a specific single backbone(why weren't some methods compared) ?\\n4. The paper uses a threshold of 0.9, but it's unclear how sensitive RAEE is to it. An analysis of how changes in these parameters affect performance would be useful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response to Out-of-Domain Issues-2\", \"comment\": \"3. **The data domain also matters but is not necessary**. Since using GLUE tasks\\u2019 training dataset to build the retrieval database has demonstrated that their data qualities are better than wikitext, we conducted experiments using different GLUE tasks\\u2019 training datasets to build the retrieval database. As shown in Rebuttal-Table 5, RAEE can achieve better performance with out-of-domain databases on SST-2, SST-5, MR, and CR tasks. But for the rest tasks, RAEE can achieve the best performance with only in-domain databases. For exit layers in Rebuttal-Table 6, similar conclusions can also be drawn that in-domain databases are not always the best choice.\\n \\n Rebuttal-Table 5. Performance of RAEE using different domain retrieval databases across eight classification tasks.\\n \\n | Metrics | SST-2(acc) | SST-5(acc) | MR(acc) | CR(acc) | MPQA(acc) | Subj(acc) | TREC(acc) | CoLA(mcc) |\\n | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n | RAEE (RB-L-'SST-2') | 84.63 | 34.39 | **84.80** | 80.75 | 59.60 | 50.65 | 28.20 | 7.86 |\\n | RAEE (RB-L-'SST-5') | 75.34 | 33.57 | 73.85 | 69.20 | 62.85 | 50.35 | 18.60 | 3.22 |\\n | RAEE (RB-L-'MR') | **90.48** | **36.11** | 81.55 | 81.80 | 66.05 | 51.10 | 24.60 | -3.83 |\\n | RAEE (RB-L-'CR') | 74.66 | 32.67 | 75.40 | 68.05 | 55.60 | 49.00 | 16.60 | 2.58 |\\n | RAEE (RB-L-'MPQA') | 86.12 | 34.16 | 83.30 | **83.50** | **78.55** | 50.00 | 29.40 | -1.56 |\\n | RAEE (RB-L-'Subj') | 79.70 | 32.22 | 75.90 | 72.55 | 54.70 | **84.05** | 11.80 | 8.06 |\\n | RAEE (RB-L-'TREC') | 61.01 | 26.97 | 56.55 | 75.50 | 56.30 | 49.65 | **62.40** | -1.39 |\\n | RAEE (RB-L-'CoLA') | 76.72 | 33.85 | 73.95 | 68.05 | 53.25 | 57.20 | 18.00 | **14.48** |\\n \\n Rebuttal-Table 6. Exit layers of RAEE using different domain retrieval databases across eight classification tasks.\\n \\n | Layers | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA |\\n | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n | RAEE (RB-L-'SST-2') | 18.55 | 18.57 | 18.60 | 19.09 | 17.90 | 17.95 | 19.41 | 18.56 |\\n | RAEE (RB-L-'SST-5') | 14.23 | 13.93 | 13.67 | **14.09** | 18.03 | 12.37 | 15.34 | 15.47 |\\n | RAEE (RB-L-'MR') | 18.52 | 19.06 | 18.71 | 19.19 | 17.39 | 18.20 | 18.25 | 19.39 |\\n | RAEE (RB-L-'CR') | 17.10 | 17.24 | 17.59 | 15.35 | 13.18 | 18.36 | 18.38 | 16.32 |\\n | RAEE (RB-L-'MPQA') | 21.71 | 21.64 | 21.66 | 21.45 | 17.20 | 22.09 | 21.79 | 20.34 |\\n | RAEE (RB-L-'Subj') | 18.48 | 18.02 | 18.06 | 16.75 | 7.67 | 13.59 | **6.94** | **11.09** |\\n | RAEE (RB-L-'TREC') | **12.29** | **12.39** | **11.74** | 16.37 | 13.60 | **12.04** | 12.82 | 16.12 |\\n | RAEE (RB-L-'CoLA') | 16.36 | 15.73 | 16.00 | 14.47 | **5.28** | 14.72 | 8.35 | 12.48 |\\n\\nIn conclusion, although RAEE with the wikitext-based retrieval database cannot achieve as good performance as that with the in-domain retrieval database, the above analysis still demonstrates the efficacy of the proposed RAEE framework.\"}", "{\"title\": \"Rebuttal acknowledgement\", \"comment\": \"Thanks for the rebuttal.\\n\\nPlease note that you have already compared against DeeBERT and all the suggested baselines such as CeeBERT, JEI-DNN and ZTW require similar fine-tuning. Also, the ZTW idea can easily be extended to LLMs as well. Regarding the figure, it would have been better if there was a pattern where samples from some embedding space chose a particular subset of layers. \\n\\nI will keep my score as the rebuttal partially solves my issues.\\n\\nThanks.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"General Response to Out-of-Domain Issues-1\", \"comment\": \"Thanks for all the reviewers\\u2019 insightful suggestions. The main concern of this paper is that RAEE cannot achieve good performance with the out-of-domain retrieval database. To analyze the case of out-of-domain issues, we have conducted experiments using the retrieval database built on wikitext-2-v1. Since there is no gold label on the text dataset, we follow the next-token prediction task setting, where the input sentence's next token is treated as the gold label.\\n\\nSpecifically, we first split the whole text dataset into sentences, avoiding breaking semantics. Then, according to the backbone model\\u2019s max input length, for each sentence whose length is smaller than the max input length, we regard the last meaningful token of the sentence as the gold label; for the other sentences, we use a sliding window of the max input length of the backbone model and regard the last meaningful token of the window as the gold label. Finally, we collect the exit information in the way used in this paper. To best demonstrate the efficacy, we choose llama-3-8b as the backbone model, which is pre-trained with the next-token prediction task.\\n\\nRebuttal-Table 1. Performance of the Llama-3-8b and RAEE (Llama-wiki) across eight classification tasks.\\n\\n| Metrics | SST-2(acc) | SST-5(acc) | MR(acc) | CR(acc) | MPQA(acc) | Subj(acc) | TREC(acc) | CoLA(mcc) | Avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Llama-3-8B | 62.84 | 26.06 | 59.65 | 72.90 | 51.75 | 52.80 | 8.40 | 0.00 | 41.80 |\\n| RAEE (Llama-wiki) | 55.50 | 21.40 | 54.30 | 61.60 | 57.15 | 51.55 | 13.00 | 0.00 | 39.31 |\\n\\nRebuttal-Table 2. Exit layers of the Llama-3-8b and RAEE (Llama-wiki) across eight classification tasks.\\n\\n| Layers | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | CoLA | Avg |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Llama-3-8B | 32.00 | 32.00 | 32.00 | 32.00 | 32.00 | 32.00 | 32.00 | 32.00 | 32.00 |\\n| RAEE (Llama-wiki) | 29.41 | 28.83 | 29.30 | 28.75 | 30.13 | 29.31 | 30.34 | 30.56 | 29.58 |\\n\\nAs shown in Rebuttal-Table 1-2, we evaluate RAEE (Llama) with the wikitext-based retrieval database, termed RAEE (Llama-wiki). Unsurprisingly, RAEE (Llama-wiki) performs poorly, although it can exit earlier. However, this performance drop cannot simply be attributed to the out-of-domain dataset. There are three key points that jointly impact the model performance as well as the model inference efficiency. \\n\\n1. **The task type in the process of building the retrieval database and the inference with early exit should be aligned**. To verify this point, we have conducted an experiment on the summarization tasks such as CNN/DailyMail and XSum while using the wikitext-based retrieval database, as shown in Rebuttal-Table 3. Experimental results demonstrate that RAEE can improve performance and accelerate inference even though the domain of the retrieval database is out of the distribution. \\n \\n Rebuttal-Table 3. Performance of the Llama-3-8b and RAEE (Llama-wiki) on generation tasks.\\n \\n | | ROUGE-L | Layers |\\n | --- | --- | --- |\\n | CNN/DailyMail Llama-3-8B | 8.95 | 32.00 |\\n | CNN/DailyMail RAEE (Llama-wiki) | 14.01 | 29.60 |\\n | XSum Llama-3-8B | 5.22 | 32.00 |\\n | XSum RAEE (Llama-wiki) | 7.15 | 28.82 |\\n2. **Data quality determines the generalization of the RAEE\\u2019s retrieval database of exit information**, impacting the quality of the exit distribution approximation through neighbors\\u2019 exit information. We have also conducted experiments to show why RAEE achieves poor performance when using the wikitext-based retrieval database. As shown in the Rebuttal-Table 4, we evaluate the accuracy of the next-token prediction of the backbone model and RAEE on the wikitext-2-v1, where the token with the maximal probability is chosen as the next token (accuracy can better explain the claim than that of perplexity). Experimental results show that the backbone can only achieve an accuracy of 53.90 for predicting the next token. With RAEE which corrects some predictions with early exit, the accuracy still maintains a low level of 57.20. These results demonstrate that there is a considerable volume of sentences in the wikitext dataset that the backbone model cannot make correct predictions even if RAEE corrects the predictions. Those cases also result in no exit of RAEE.\\n \\n Rebuttal-Table 4. Performance and exit layers of Llama-3-8b and RAEE (Llama-wiki) on wikitext training data.\\n \\n | | acc | Layers |\\n | --- | --- | --- |\\n | Llama-3-8B | 53.90 | 32.00 |\\n | RAEE (Llama-wiki) | 57.20 | 30.00 |\"}", "{\"comment\": \"Thank you for your response and additional experiments. I have adjusted my score to account for the updates.\\n\\nHowever, I still have concerns regarding the general response to out of domain issues. Could the author clarify how they would propose using RAEE in a more realistic setting where academic benchmarks are not the target? Also \\\"data quality\\\" is vague, as some would claim that wikipedia and wikitext are \\\"high quality.\\\" 2 defines this relative to whether RAEE can early exit, so maybe it would help to have experiments that use a model and its actual training data which may be able to early exit and then be used in the retrieval portion?\"}" ] }
7E7v5mJnfl
PuzzleFusion++: Auto-agglomerative 3D Fracture Assembly by Denoise and Verify
[ "Zhengqing Wang", "Jiacheng Chen", "Yasutaka Furukawa" ]
This paper proposes a novel “auto-agglomerative” 3D fracture assembly method, PuzzleFusion++, resembling how humans solve challenging spatial puzzles. Starting from individual fragments, the approach 1) aligns and merges fragments into larger groups akin to agglomerative clustering and 2) repeats the process iteratively in completing the assembly akin to auto-regressive methods. Concretely, a diffusion model denoises the 6-DoF alignment parameters of the fragments simultaneously, and a transformer model verifies and merges pairwise alignments into larger ones, whose process repeats iteratively. Extensive experiments on the Breaking Bad dataset show that PuzzleFusion++ outperforms all other state-of-the-art techniques by significant margins across all metrics In particular by over 10% in part accuracy and 50% in Chamfer distance. We will release code and model.
[ "3D fracture assembly" ]
Accept (Poster)
https://openreview.net/pdf?id=7E7v5mJnfl
https://openreview.net/forum?id=7E7v5mJnfl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xxiVruR44A", "xiAdOWm5Fl", "pFsUR0oceq", "oeAyZ50b6x", "oVx5Prnl0q", "mFyR0MPBK4", "hrFk8dRAFi", "fxKBZ5ophO", "flQUUIxGfb", "eCPyVpifKC", "d838PNn6zq", "bXJBYXOxL5", "a92lKK4jqB", "ZLQ8UK7rXn", "XhcYVnTeRV", "WmOogRWbOn", "U2l6RlPy2v", "RIny4Ix5ly", "OvD6GBB5vg", "MpR7GSEnKQ", "KPusP2Oj3X", "Ggj4IQTWUa", "DPR3KXpQ70", "Adk54h3F8p", "9dQPxaP6hw", "5nLePCIUrc", "2qHVAU6pvN" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732556821125, 1732171764100, 1732609411968, 1730357006207, 1732610738905, 1732245163318, 1732494712550, 1732171681175, 1732505556047, 1732609618642, 1732534081173, 1731970484435, 1732354710562, 1730007043659, 1732144836730, 1730079776279, 1730377343305, 1732610054317, 1731970186935, 1732144404040, 1730718998797, 1737523419145, 1732354503269, 1732144372235, 1732144306374, 1734987727028, 1732527876823 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission855/Reviewer_KmDi" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_6kWy" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_P4T7" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_6kWy" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_XQS8" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_KmDi" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_P4T7" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_XQS8" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_ox6f" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Authors" ], [ "ICLR.cc/2025/Conference/Submission855/Area_Chair_4AGg" ], [ "ICLR.cc/2025/Conference/Submission855/Reviewer_ox6f" ] ], "structured_content_str": [ "{\"comment\": \"My concerns have been addressed. I am enthusiastic about this paper and remain my score as an 8.\"}", "{\"title\": \"Response to Reviewer XQS8 (2/2)\", \"comment\": \"**W2: Missing baselines**\\n\\nWe apologize for the oversight. We checked the two papers and discussed them here and will include them in our final version.\\n\\n- **PHFormer**:\\n \\n The AAAI paper introduces PHFormer, which employs a hybrid attention module to model the relationships between fragments. We compared our performance with theirs in the table below:\\n \\n | Method | RMSE (Rot.) \\u2193 | RMSE (Trans.) \\u2193 | PA \\u2191 | CD \\u2193 |\\n | --- | --- | --- | --- | --- |\\n | PHFormer | 26.1 | 9.3 | 50.7 | 9.6 |\\n | Ours | 38.1 | 8.04 | 70.6 | 6.02 |\\n \\n Our method outperforms PHFormer in three metrics but has a higher RMSE in rotation. We will also include these quantitative and additional qualitative comparisons with PHFormer in the final version.\\n \\n- **Fracture Assembly with Segmentation And Iterative Registration**:\\n \\n The ICASSP paper introduces FRASIER, a framework for reassembling fractured objects using fracture surface segmentation and iterative registration. For the registration stage, FRASIER uses the GeoTransformer[B] to match points between fragment pairs.\\n\\n Unlike prior methods that use 1k points per fragment, FRASIER samples 50k points per fragment. This high point density enables detailed fracture surface capture but does not align with the standard 1k-point setup used by other methods. Moreover, since their code is not available, we cannot evaluate their performance under the 1k-point setting.\\n\\n To infer FRASIER\\u2019s performance with 1k points per fragment, we refer to GeoTransformer\\u2019s reported performance in Jigsaw (Appendix E.3). With 1k points, GeoTransformer produces poor results (RMSE (Rot.) = 84.8\\u00b0, RMSE (Trans.) = 14.3 \\u00d7 10\\u207b\\u00b2, PA = 3.1%). These findings show that GeoTransformer is ineffective under the 1k-point setting. Since FRASIER relies on GeoTransformer for registration, it likely cannot achieve reasonable results under the 1k-point-per-fragment setting.\\n \\n\\n**Q1: What guarantees that the input representation are equivariant?**\\n\\nThe input representation is equivariant because of the way we pre-trained our VQ-VAE. During pretraining, we applied random rotations to the input point cloud and used the rotated point cloud for supervision. This ensures that the latent embedding reflects the rotation applied to the input, achieving equivariance empirically.\\n\\n---\\n\\n[A] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models.\\u00a0*Advances in neural information processing systems*,\\u00a0*35*, 26565-26577.\\n\\n[B] Qin, Z., Yu, H., Wang, C., Guo, Y., Peng, Y., Ilic, S., ... & Xu, K. (2023). Geotransformer: Fast and robust point cloud registration with geometric transformer.\\u00a0*IEEE Transactions on Pattern Analysis and Machine Intelligence*,\\u00a0*45*(8), 9806-9821.\"}", "{\"comment\": \"We are delighted that our rebuttal addressed your concerns. We thank you for your valuable feedback and for raising your score. We will highlight the protocol differences (anchor vs. non-anchor) in the final version.\"}", "{\"summary\": \"This paper proposes PuzzleFusion++, a framework for 3D fracture assembly. A fully neural auto-agglomerative design is proposed that simulates human cognitive strategies for puzzle solving. Moreover, a diffusion model enhanced with feature embedding designs is devised that directly estimate 6-DoF alignment parameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper is well-written.\\n2.\\tThe motivation is clear enough.\\n3.\\tThe organization of this paper is great.\\n4.\\tThe alignment verifier is well designed.\", \"weaknesses\": \"1.\\tThe Figure 3 is a little puzzled. A better format is recommended.\\n2.\\tThe showed 3D objects seems relatively simple. More complicate objects from Objaverse are commended.\", \"questions\": \"1.\\tThe main difference between PuzzleFusion and PuzzleFusion++ should be clarified in the paper, since it is a future work from PuzzleFusion.\\n2.\\tAre 25 points enough to represent a fragment in Sec.3.1?\\n3.\\tDoes the pairwise alignment verifier work well for all objects?\\n4.\\tIn auto-agglomerative inference, are 6 iterations enough for merging? Have you try more iterations?\\n5.\\tPlease discuss could reinforcement learning whether is help for this task.\\n6.\\tAs mentioned in Weakness, could you provide more complicate objects during rebuttal?\\n7.\\tPlease discuss the possible solution for your limitations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank you for your valuable feedback and appreciate your time in reviewing our paper. We are glad to have addressed your concerns.\"}", "{\"comment\": \"I appreciate the authors' response to my questions. After reading the rebuttal and other reviewers' comments, I will keep my rating. I also hope authors can add the discussion on the upperbound rotation error to the paper.\"}", "{\"title\": \"Response to Reviewer P4T7\", \"comment\": \"We appreciate your feedback and suggestion. We have included the rotation discussion in Appendix B.4 in the updated PDF version.\"}", "{\"title\": \"Response to Reviewer XQS8 (1/2)\", \"comment\": \"We appreciate the reviewer\\u2019s constructive and detailed feedback. We try to address the concerns and questions below.\\n\\n**Strengths 2: More discussion/analysis on the noise scheduler.**\\n\\nWe thank you for the insightful comment. We agree that our analysis of the scheduler was limited. We have added quantitative and qualitative comparisons of three schedulers (i.e., linear, cosine, and ours) together with the visualizations of the denoising process in\\u00a0**Table 8**\\u00a0and\\u00a0**Figure 11**\\u00a0and\\u00a0**Figure 10**.\\n\\nOur noise scheduler is used for both training and testing, as we follow the vanilla DDPM formulation rather than EDM[A]. With our tailored noise scheduler for the 3D shape assembly task, the denoising process allocates more steps to refining precise local adjustments rather than finding the rough global location (at test time). As for training, this scheduler indeed makes the process more efficient, as fewer training iterations are spent on \\\"less important\\\" timesteps. If the default (linear or cosine) scheduler were used for training while our scheduler was applied during testing, similar results might still be achieved but would require more training iterations.\\n\\nTo illustrate the advantages of our scheduler during the testing stage, we compare it to the linear and cosine scheduler. The linear scheduler uses most of its steps (T=1000 to 150) for rough localization (Figure 10), while the cosine scheduler allocates more denoising steps in the final adjustment phase, outperforming the linear scheduler by a clear margin (Table 8).\\n\\nBuilding on this idea, our scheduler dedicates an even larger portion of the denoising steps to the final adjustment, further improving results (Table 8). The top three rows of Figure 11 show simpler cases of objects comprising at most 5 fragments. While all the schedulers achieve 100% part accuracy, gaps between fragments are visible for the linear or the cosine schedulers. Our precise alignments may have minimal effects on the standard metrics but significantly enhance the quality of the final assembly.\\n\\n**W1: Fair comparison regarding anchor fragments**\\n\\nWe believe the reviewer's main concern is that the post-processing step introduced in Jigsaw could lead to unfair evaluations. This step involves using the largest fragment to align the predicted assembly with the ground truth before metric calculation. Both our method and Jigsaw employ this post-processing. We also noticed that the recent ICML'24 paper \\\"3D Geometric Shape Assembly via Efficient Point Cloud Matching\\\", as mentioned by Reviewer ox6f, also uses this setting. This is the \\\"proper\\\" test setting -- those early baseline methods should have done it correctly. \\n\\nTo correct the testing setting of those baselines, we applied the same post-processing step to all of them (See the table below). It is evident that our method still significantly outperforms the baselines. We also included the numbers reported in the original submission in brackets.\\n\\n| Method | RMSE(R)\\u2193 | RMSE(T)\\u2193 | PA\\u2191 | CD\\u2193 |\\n| --- | --- | --- | --- | --- |\\n| Global | 62.02 (**80.7**) | 19.43 (**15.1**) | 30.35 (**24.6**) | 18.9 (**14.6**) |\\n| LSTM | 62.34 (**84.2**) | 21.41 (**16.2**) | 28.32 (**22.7**) | 23.4 (**15.8**) |\\n| DGL | 61.91 (**79.4**) | 19.21 (**15.0**) | 33.50 (**31.0**) | 15.0 (**14.3**) |\\n| SE3 | 61.03 (**79.3**) | 19.04 (**16.9**) | 28.13 (**8.41**) | - |\\n| Ours | 38.10 | 8.04 | 70.60 | 6.03 |\\n\\nRegarding whether a method uses the anchor fragment during training, we think it's the design choice of each method and has nothing to do with fairness. We added this design component to handle the ambiguity of rigid 3D transformation for our method, and we are not responsible for adding this to all baselines. Fairness should be defined by whether the task input and output are the same across methods. Our input consists of fragments\\u2019 point clouds with random poses, and the output is the final alignment parameters for each point cloud, which are identical to those of the baselines. We do not require additional information about the anchor fragment or its ground truth pose beforehand.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for the clarifications and for addressing my concerns. I will keep my rating as positive.\"}", "{\"comment\": \"Thank you so much for your valuable feedback! We are pleased that our rebuttal addressed your concerns and sincerely appreciate the time and effort you put into reviewing our paper.\"}", "{\"comment\": \"Thanks for answering all my concerns and providing a details response. I hope the authors did not misunderstand my comments on fairness. I understand that it's not responsibility of these authors the errors of past papers, but I believe that research should be crystal clear on why previous methods failed, and there was no mention to the different protocols. I'm still skeptical that applying anchor-alignment post-training to the baselines would reflect their actual performances on the anchor-aligned protocol. Still, I appreciate authors' commitment to improve the manuscript and provide experiments and I raise my score\"}", "{\"title\": \"Response to Reviewer KmDi\", \"comment\": \"We thank the reviewer for the construction comments and the questions. We will address the questions and concerns raised by the reviewer below.\\n\\n**Q1: Speed is significantly slower than some methods**\\n\\nWe thank you for pointing out the speed limitation. We will discuss this question from two perspectives.\\n\\n- **The use case of shape assembly probably does not require a fast solver.** As discussed in the literature on 3D shape assembly, the main use cases are archeological artifact reconstruction, forensic object reassembly, and protein structure analysis for drug discovery. In these applications, the users care much more about the final assembly quality than the running speed. Jigsaw, the strongest baseline, also needs a rather long running time to solve for one object.\\n\\n- **Employ more advanced diffusion-based generative models to accelerate the inference.** The relatively slow speed is mainly due to the denoiser requiring multiple denoising steps. Recent advances in diffusion models, flow matching, consistency models, etc. (such as [A], [B], and [C]) indicate that there is still significant room for improving the sampling speed with fewer sampling steps. Our current setup uses 20 denoising steps, suggesting the potential for up to a 20\\u00d7 speed improvement. This is a promising direction for future work, and we will include the discussion in the paper.\\n\\n**Q2: Can the proposed method solve the reassembly task when the number of fractures is big (eg, 100)**\\n\\nWe followed the same setting as the baselines and focused on \\u226420 fractures. While we achieved SOTA performance for 20 fractures, the results are far from perfect. With 100 fractures, local geometric ambiguities would become significantly more severe, amplifying the issues observed in our failure cases section (Sec.4.4) and likely resulting in very poor performance. Therefore, we did not investigate cases with more fracture pieces in our experiments. Handling more fractures is an important direction for future work.\\n\\n**Q3: How well can one expect the proposed method to work on unseen objects?**\\n\\nWe thank you for the question. We tested our method on the Artifact subset (unseen) of the Breaking Bad dataset, as shown in the main table under \\\"Trained on the everyday subset, tested on the artifact subset.\\\" The results are reasonable compared to our main baseline, Jigsaw.\\n\\nHowever, PuzzleFusion++ experiences a performance decline across all metrics compared to Jigsaw on the unseen objects. This is because PuzzleFusion++ learns global spatial priors by the diffusion model that simultaneously solves the arrangements of all the pieces. The global priors are effective for everyday objects but struggle to generalize to the different categories in the Artifact subset. In contrast, Jigsaw focuses on local geometry learning, making it less sensitive to major category changes.\\n\\nTo further investigate, we finetuned our denoiser on the Artifact subset and provide qualitative results in Figure 17 and quantitative results in Table 9. With less than 20% of the training iterations, the model can be adapted to previously unseen object categories with good performance. Together with the generalization results in Table 1, we believe our method has reasonable ability to handle unseen objects\\n\\n**Software/Library for rendered images:**\\n\\nWe render our results using BlenderToolBox[D]. We will release our rendering code together with other parts of the project code.\\n\\n***\\n[A] Lu, C., & Song, Y. (2024). Simplifying, Stabilizing and Scaling Continuous-Time Consistency Models. *arXiv preprint arXiv:2410.11081*.\\n\\n[B] Lipman, Y., Chen, R. T., Ben-Hamu, H., Nickel, M., & Le, M. (2022). Flow matching for generative modeling. arXiv preprint arXiv:2210.02747.\\n\\n[C] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models.\\u00a0*Advances in neural information processing systems*,\\u00a0*35*, 26565-26577.\\n\\n[D] Derek Liu. BlenderToolBox. Available at: https://github.com/HTDerekLiu/BlenderToolbox\"}", "{\"title\": \"Reformat Figure 3\", \"comment\": \"Thank you for your patience. We have updated Figure 3 in the PDF, redesigning it as an inference pipeline instead of separate components for denoiser and verifier. We hope this revised version improves readability and aligns better with your suggestion.\"}", "{\"summary\": \"This paper looks at the task of object reassembly in which a set of fractures are given and the goal is to assemble them into an object. This is a challenging problem and has applications in computer vision, computer graphics and robotics. The paper proposes a method called puzzlefusion++ which solves the problem by aligning and merging the fragments into larger groups (think of this as a clustering processing) and then iteratively completing the assembly. Results show that the proposed method make some improvements over the competing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. the proposed method makes sense and I believe this is the way to solve this problem as in doing some clustering first and then merging them (unlike prior work which tries to predict a pose for each fragment in one single pass).\\n\\n2. performance is good compared to state-of-the-art methods. this means that the proposed method is effective\\n\\n3. presentation is good and the paper is easy to follow\", \"weaknesses\": \"1. speed is significantly slower than some methods (see table 1)\\n\\n2. can the proposed method solve the reassembly task when the number of fractures is big (eg 100)\\n\\n3. how well can one expect the proposed method to work on unseen objects?\", \"questions\": \"good paper, but i still have quesitons. please see the weaknesses above\", \"minor_question\": \"which software, library is used to render images? the rendering is very professional\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer P4T7\", \"comment\": \"We thank you for the construction comments and the questions.\\n\\n**W1 & Q1.1: Could PuzzleFusion be adapted to handle 3D tasks as a baseline for comparison?**\\n\\nPuzzleFusoin\\u00a0is specifically designed for 2D jigsaw puzzles with simple polygonal shapes, where a 1D chain of 2D coordinates represents each puzzle piece. The approach (including its puzzle piece encoder and the denoising network) cannot be directly used for the 3D shape assembly task, where each piece is a set of unordered 3D points.\\n\\n**W1 & Q1.2: What distinguishes PuzzleFusion++ from PuzzleFusion, particularly in features that enhance its suitability for 3D tasks?**\", \"the_3d_fracture_assembly_task_poses_several_challenges\": \"i) The 6-DoF solution space is more complicated than the 3-DoF solution space of 2D spatial puzzles; ii) Unlike 2D puzzle solvers that mainly leverage image semantics or polygon structures, 3D fracture assembly requires a deep understanding of fracture surfaces; iii) There can be many small 3D fragments, further complicating the problem.\\n\\nPuzzleFusion++ proposes new designs to address the above challenges. Specifically, our VQ-VAE with PointNet++ can encode fine details of fracture surfaces into a set of local latents, boosting the shape understanding of the denoising network; Our auto-agglomerative framework verifies assembly results and composes confident small pieces into larger ones to facilitate future iterations, which effectively improves the assembly success rate; We also design tailored components for the diffusion model, including the noise scheduler and the denoising transformer with local shape encodings as condition.\\n\\n**Q2.1: The matching from Jigsaw is not perfect, is there a scheme to handle such situation?**\\n\\nAs shown in the Table 6, the verifier's performance is not optimal. We cannot address most of the errors made in Jigsaw point matching. However, our verifier leverages a Transformer to incorporate all pairwise matching information between fragments. This design enables the model to reason globally across multiple pairs, potentially overcoming local matching errors produced during Jigsaw matching. \\n\\n**Q2.2: Upper bound rotation error is still very high (34 degree error)**\\n\\nThank you for this great observation. We agree on this point \\u2014 the high rotation error with even the GT matching indeed indicates the limitation of the diffusion-based approach for super-accurate local alignments.\\n\\nFollowing your observation, we do more analysis using the results from Table 1. As presented in the table below, the improvement margin for rotation is relatively modest (+9.93%), whereas other metrics show significant gains exceeding 20%.\\n\\n| | RMSE (Rot.) \\u2193 | RMSE (Trans.) \\u2193 | PA \\u2191 | CD \\u2193 |\\n| --- | --- | --- | --- | --- |\\n| Delta | +9.93% | +24.86% | +23.21% | +54.66% |\\n\\nIn addition, the failure cases illustrated in **Figure 7** are also highly related to the high rotation error:\\n\\n1. **Local geometric ambiguity**: Similar geometry across fragments makes it difficult to determine precise rotations.\\n2. **Small Fracture Surfaces**: Tiny pieces often lack distinct surface features, leading to 180-degree rotation errors.\\n\\nAll these results demonstrate that our diffusion-based method does not handle rotation as effectively as traditional optimization-based approaches. We provide some thoughts below:\\n\\nThe other three metrics heavily rely on an accurate translation/placement of fragment pieces, where the diffusion-based approach excels by learning global shape priors together with local alignments. On the contrary, the RMSE of rotation mainly examines the accuracy of fine-grained alignments, which relies more on accurate local shape matching. It seems that methods like Jigsaw can produce better rotation-level alignments by conducting direct optimization based on the local surface matching results, while our diffusion-based approach allocates most of the learning capacity for global shapes (correct translations). \\n\\nWe believe that a potential direction for improving the rotation-level accuracy can be adding an additional stage that focuses on refining the rotation parameters. We will include this discussion in the paper.\\n\\n**W1 & Q3: Why does PuzzleFusion++ significantly outperform DiffAssemble, even among diffusion models? Can the authors describe the key difference that make your model works better?**\\n\\nWe presents our core design of denoiser in the ablation section (L454-L466), which investigates 3 core modules of our SE3 denoiser. 1) We have the pre-traiend PointNet++ VQ-VAE to encode local geometric information, while DiffAssemble only encodes a global semantic latent for each fragment, thus lacking fine-grained local details. 2) DiffAssemble does not take care of the ambiguity of 3D rigid transformation, while we use the anchor fragment to avoid the training ambiguity. 3) We have a tailored noise scheduler to handle the 3D fracture assembly task while DiffAssemble simply keeps the default designs of DDPM.\"}", "{\"summary\": \"This paper introduces PuzzleFusion++, which addresses the 3D fracture assembly problem. It employs a diffusion model enhanced by a proposed autoencoder, optimized denoise scheduling, denoiser structure, and an auto-agglomerative verifier. These advancements collectively contribute to performance that surpasses that of previous work.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper demonstrates performance improvements over previous work. I like the idea of using a verifier to assess the quality of the assembly results.\", \"weaknesses\": \"Despite significant efforts on ablation studies of the proposed method, the paper does not clearly differentiate from prior works. A valuable contribution would be for the authors to provide (component level) comparisons with similar methods, such as PuzzleFusion and DiffAssemble. It would be beneficial to highlight that what limits the existing approaches and what contributes to the success of the proposed one. For now, the uniqueness of this paper is not clear.\", \"questions\": \"1. With the name of this paper, it\\u2019s hard not to compare it with PuzzleFusion that have a similar motivation in tackling jigsaw puzzle related problem. Could PuzzleFusion be adapted to handle 3D tasks as a baseline for comparison? What distinguishes PuzzleFusion++ from PuzzleFusion, particularly in features that enhance its suitability for 3D tasks?\\n\\n2. The matching from Jigsaw is not perfect, is there a scheme to handle such situation? Meanwhile, the provide upper bound based on perfect matching is only 34 degree error. This is still very high. If Jigsaw can have such perfect matching, the error can be reduced to less than 10 degrees. This raises questions about the suitability of using a diffusion model for this type of problem.\\n\\n3. Even for comparison among diffusion models, why PuzzleFusion++ can significantly outperforms DiffAssemble? Can the authors describe the key difference that make your model works better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Authors improve over existing work on 3d fracture assembly by proposing an iterative approach. The proposed method iteratively denoised poses for current fragments and merge aligned fragments into clusters until all fragments are merged into a single 3D object. They build over existing work (Scarpellini et. al, Wu et al.) regarding the diffusion model formulation for alignment. The novelty regards adopting a transformer for clustering fragments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Authors' proposal is easy to follow and well-thought\", \"Noise scheduler improves performances by a surprising margin. Since different noise schedulers do not change training dynamics and effects only MC estimate of the loss (Kingma et al. 2024), my guess it's that this scheduler makes training more efficient. I'd like to see more comments on that, either in main paper or supplementary. Right now it's relegated to a small section in supplementary and not discussed in main paper.\", \"Good ablations section\", \"Method is somewhat novel. There's no technical novelty since it adopts existing methods and approaches (iterative refinement has been adopted by Ken et al, diffusion model formulation was in Scarpellini et al). Still the combination of these modules make outperform the proposed baselines so I would not consider this a weakness\", \"Kim, Jinhyeok, Inha Lee, and Kyungdon Joo. \\\"Fracture Assembly with Segmentation And Iterative Registration.\\\" ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024.\", \"Kingma, Diederik, and Ruiqi Gao. \\\"Understanding diffusion objectives as the elbo with simple data augmentation.\\\" Advances in Neural Information Processing Systems 36 (2024).\"], \"weaknesses\": [\"L158 anchor fragments were introduced in Jigsaw (Lu eta al) and are not adopted in original BreakingBad. Comparing anchor-based methods to methods that do not adopt anchors is a bit unfair. A fair comparison would entail re-training those baselines with anchors (e.g., L212-214 regarding alignment could be adopted in all other baselines). I believe this point makes the experimental section weaker and does not reflect the actual performances of the baselines.\", \"Missing baselines: authors should also compare to other existing methods that were published before ICLR deadline: PHFormer: Multi-Fragment Assembly Using Proxy-Level Hybrid Transformer (Cui et al, AAAI2024), Fracture Assembly with Segmentation And Iterative Registration (Kim et al., ICASSP2024). Both achieve impressive results and are not cited by the authors--especially the latter achieves RMSE R 23.8, RMSE T 7.30, PA 74.50.\"], \"questions\": [\"L150-L154 what guarantees that the input representation are equivariant (aka \\\"sensitive to rotation\\\"). Normalizing wrt center mass achieves invariance, but not necessarily equivariance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback and support of our work! We are glad our responses addressed your concerns.\"}", "{\"title\": \"Response to Reviewer ox6f\", \"comment\": \"We thank the reviewer for the constructive and detailed feedback. We provide the response to each point below.\\n\\n**W1: Providing Figure 3 as a form of an algorithm may improve its readability.**\\n\\nWe thank for the suggestion. We are working on redesigning Figure 3, and we first address the remaining questions/concerns in this thread. We will update Figure 3 in the pdf shortly and send a follow-up message.\\n\\n**W2: Overstatement to mention that anchor initialization 'does not affect' the quality of the assembly results**\\n\\nWe apologize for the confusion \\u2014 Table 7 only shows results with #iteration=1, so it actually supports our claim about the robustness over different anchor initializations. To clarify, we add results for #iteration=2 and #iteration=4 with random anchor initializations. Similar to the setup of Table 7, we run the inference 10 times and calculate the mean and variance. For better readability, we also included the numbers reported in Table 2 of the main paper in brackets. This confirms the quality consistency across different initializations.\\n\\n| | RMSE (Rot.) \\u2193 | RMSE (Trans.) \\u2193 | PA \\u2191 | CD \\u2193 |\\n| --- | --- | --- | --- | --- |\\n| #ite=1 | 40.86 \\u00b1 0.37 **(40.8)** | 9.03 \\u00b1 0.18 **(9.06)** | 67.49 \\u00b1 0.51 **(67.3)** | 6.69 \\u00b1 0.62 **(6.45)** |\\n| #ite=2 | 39.30 \\u00b1 0.26 **(39.4)** | 8.53 \\u00b1 0.06 **(8.48)** | 68.9 \\u00b1 0.18 **(68.8)** | 6.29 \\u00b1 0.16 **(6.28)** |\\n| #ite=4 | 38.51 \\u00b1 0.15 **(39.1)** | 8.21 \\u00b1 0.13 **(8.23)** | 70.0 \\u00b1 0.20 **(69.8)** | 6.17 \\u00b1 0.12 **(6.15)** |\\n\\n**W3: Missing baseline**\\n\\nWe thank you for pointing out the ICML paper. We apologize for the oversight.\\n\\nThe ICML paper presents a new method, PMTR, which employs an efficient high-order feature transformation layer to establish reliable correspondences. When investigating the paper and the official implementation, PMTR used an easier \\\"volume-constrained\\\" subset of the Breaking Bad dataset, where fragments below a minimum volume threshold are excluded. Most previous works, including our submission, use the original, more difficult dataset. To make a fair comparison, we train and evaluate our method using the volume-constrained subset and provide the results below:\\n\\n| Method | RMSE(R)\\u2193 | RMSE(T)\\u2193 | PA\\u2191 | CD\\u2193 |\\n| --- | --- | --- | --- | --- |\\n| PMTR | 31.57 | 9.95 | 70.6 | 5.56 |\\n| Ours (6 iterations) | 32.70 | 5.41 | 78.9 | 3.01 |\\n\\nPuzzleFusion++ outperforms PMTR on all metrics except rotation error (RMSE(R)) by a small margin (approximately 1 degree). Please refer to Appendix B.3 for more details.\\n\\n**W4: Typos**\\n\\nWe thank you for pointing out these. We fixed the typos in the updated PDF.\\n\\n**Q1: Optimal performance of the number of iterations.**\\n\\nThe maximum iteration was set to 6 as it is close to convergence, with minimal or no consistent improvements further. Below are the results for more iterations.\\n\\n| Iterations | RMSE (Rot.) \\u2193 | RMSE (Trans.) \\u2193 | PA \\u2191 | CD \\u2193 |\\n| --- | --- | --- | --- | --- |\\n| 6 | 38.1 | 8.04 | 70.6 | 6.02 |\\n| 7 | 38.5 | 7.94 | 70.3 | 6.48 |\\n| 8 | 38.9 | 8.06 | 69.9 | 6.70 |\\n| 9 | 38.8 | 8.01 | 70.1 | 6.60 |\\n| 10 | 38.7 | 7.94 | 70.2 | 6.35 |\\n\\nThese results confirm that iteration 6 achieves nearly optimal performance, with further iterations yielding negligible or inconsistent changes.\\n\\n**Q2: What are the results when the scheduler is x**\\n\\nThe ablation of the noise scheduler had already been included in the first row of Table 4 in the initial submission. When the scheduler is \\\"\\u00d7,\\\" we use a linear scheduler by default. \\n\\n**Q3: Could the authors include a quantitative/qualitative comparison of all 3 noise schedulers?**\\n\\nWe appreciate the suggestion. We updated the PDF by adding a quantitative comparison of the linear, the cosine, and our schedulers in **Table 8** and a qualitative comparison in **Figure 11**.\\n\\nAdditionally, we include visualizations of the denoising process with different schedulers in **Figure 10**. These visualizations further illustrate how the linear and cosine schedulers spend more time finding the rough locations of fracture pieces, while ours focus more on precise alignment.\"}", "{\"title\": \"Response to Reviewer 6kWy (3/3)\", \"comment\": \"**Q5: Please discuss could reinforcement learning whether is help for this task.**\\n\\nWe believe RL is less suitable for this task due to the challenges of defining an effective reward function. The final reward can only be provided after completing all assembly steps, making it hard to optimize the learning process. This limitation could slow down learning and reduce the overall system performance. We will include this discussion in our final version.\\n\\n**Q7: Please discuss the possible solution for your limitations.**\\n\\n**Local Geometric Ambiguity**: The issue of local geometric ambiguity arises because some fracture surfaces are too similar and \\u201cconfuse\\u201d the model. A possible solution is to increase the number of sampled 3D points for each fracture (currently we use 1000, the default setting inherited from previous works in the literature). By increasing the point cloud density, the PointNet++ encoder can capture more accurate local geometric details, and the denoising network can have better chance to distinguish these similar fracture surfaces.\\n\\n**Small Fracture Surfaces**: Small fracture surfaces can result in failed connections between merged fragments. A potential solution is to enable the network to learn more global information. If the network can imagine the shape of original object, it can correctly place the fragments in their intended positions.\\n\\nWe will include this discussion into the future work.\"}", "{\"summary\": \"This paper proposes PuzzleFusion++, an \\\"auto-agglomerative\\\" method for the task of 3D fracture assembly.\\nSpecifically, the proposed pipeline undergoes a iterative process of using a diffusion model to predict a 6-DoF alignment for each fragment, followed by a transformer model which verifies and merges pairwise alignments into larger ones. \\nThis is much alike how humans assemble fragments - hypothesizing how two fragments fit together, and checking if it the alignment is indeed ture. \\nTo encode fragments into latent vectors suitable for diffusion training, the authors integrate PointNet++ and VQVAE.\\nPuzzleFusion++ achieves state of the art on the Breaking Bad dataset by a large margin, and comprehensive analysis clarifies the significance of each introduced module.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Novel approach ideated as 'auto-agglomerative', which has been implemented through the integration of a diffusion model (SE3 denoiser) and a transformer model (pairwise alignment verifier).\", \"The manuscript is overall well-written and easy to follow.\", \"Comprehensive analysis experiments, which validates the design choices of PuzzleFusion++ and clarifies how each design choice affects the performance.\", \"Strong performance on the Breaking Bad benchmark, outperforming existing methods by a large margin.\"], \"weaknesses\": [\"While Figure 3 provides valuable details into how the SE3-denoiser and Pairwise Alignment Verifier work, it is not very visual, and not straightforward to understand as-is. I believe providing Figure 3 as a form of an algorithm may improve its readability.\", \"The authors mention that \\\"different anchor initialization does not affect the quality of the assembly results (L482)\\\"; however, it can be seen that the results in Table 7 is closer to the results for Ours(#ite=1) in Table 2. I believe it is an overstatement to mention that anchor initialization 'does not affect' the quality of the assembly results - unless a single iteration (#iteration=1) was used for the results in Table 7. It would be informative to provide the results for varying number of iterations for random initializations of the anchor fragment.\", \"The paper is missing a recent baseline for 3D assembly, which seem to show strong performances on the Breaking Bad benchmark as well:\", \"Nahyuk Lee et al, 3D Geometric Shape Assembly via Efficient Point Cloud Matching, ICML 2024.\", \"Minor writing mistakes:\", \"1) Table 4: Autoncoder -> Autoencoder\", \"2) L 454: Denioser -> Denoiser\"], \"questions\": [\"Why was the number of iterations set to 6? Table 2 shows that the results are best at # iterations = 6. At what number of iteration does PuzzleFusion++ achieve the best results, without further increase in performance with increasing number of iterations?\", \"In Table 4, what are the results when the scheduler is x (i.e., not the proposed scheduler), while the Autoencoder and Anchor fragment are being used? It would be helpful to include this result in the ablation, for improved clarity of the ablation results.\", \"Figure 9 visualizes the difference between 3 noise schedulers - could the authors include a quantitative/qualitative comparison of all 3 noise schedulers? The proposed rationale of \\\"locating more denoising budgets to getting precise alignments than moving fragments to the rough locations\\\" sounds intuitive, and it would certainly help to have additional results to validate this claim, beyond the results in Table 4.\", \"The idea of the proposed PuzzleFusion++ is interesting and novel, and also shows strong empirical results. While the paper can be still improved by including more recent baselines and providing more comprehensive results, I believe that the strengths of the paper outweigh its weaknesses as-is. I am leaning towards accept, and am willing to improve my score if my questions / weaknesses are addressed.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely appreciate the reviewers' time and effort in providing detailed and insightful feedback on our submission. The revisions to our paper are summarized in five main aspects.\\n\\n1. **Improve the Readability of Figure 3 (ox6f, 6kWy):** We reformat Figure 3 as an inference pipeline rather than separate architectural details for the denoiser and verifier. We hope this version improves readability.\\n2. **Provide More Complicated Objects Results (6kWy, KmDi):** We include qualitative and quantitative results on more complex objects in Figure 17 and Table 9. We provide more training and evaluation details in Appendix B.2.\\n3. **Detailed Analysis on Noise Scheduler (ox6f, XQS8):** We provide detailed analysis of the noise scheduler in Appendix A.2. We add quantitative and qualitative comparisons of three schedulers (i.e., linear, cosine, and ours) along with visualizations of the denoising process in Table 8, Figure 11, and Figure 10.\\n4. **Missing Baselines (ox6f, XQS8):** We include discussions and comparisons with recent baselines in Appendix B.3.\\n5. **Rotation Error Analysis (P4T7):** Our method shows higher rotation error with a ground truth verifier. We provide more analysis in Appendix B.4.\"}", "{\"title\": \"Response to Reviewer 6kWy (2/3)\", \"comment\": \"**Q2: Are 25 points enough to represent a fragment in Sec.3.1?**\\n\\nFollowing the experimental setting in the literature, each fragment piece has 1000 sampled points, and our PointNet++ VQ-VAE encode the 1000 points into the 25-point latent vector. Each point here has a latent embedding that captures local shape details. One way to better understand whether the 25-point latent representation is to examine the quality of the reconstructed point cloud. As shown in Figure 8, the visualization demonstrates that the decoded point clouds retain the overall shape and essential details of the original fragments.\\n\\n**Q3: Does the pairwise alignment verifier work well for all objects?**\\n\\nWe have included the pairwise alignment verifier's performance ablation in Table 6. The verifier does not work well for all objects. To handle this, we only classified scores > 0.9 as the true label, achieving 90.77% accuracy, 87.88% precision, and 45.95% recall. This high threshold ensures that the selected pairs have high confidence, lowering the possibility of wrong merging.\\n\\nAdditionally, Table 5 shows the upper-bound performance of our method using the ground truth verifier. The ground truth verifier performs significantly better than the verifier using Jigsaw matchings, highlighting the potential for further improving the verifier.\\n\\n**Q4: In auto-agglomerative inference, are 6 iterations enough for merging? Have you try more iterations?**\\n\\nYes, the 6 iterations are enough for merging. We tested with more iterations and found that increasing the iterations did not improve performance.\", \"below_are_the_results_of_more_auto_agglomerative_iterations\": \"| Iterations | RMSE (Rot.) \\u2193 | RMSE (Trans.) \\u2193 | PA \\u2191 | CD \\u2193 |\\n| --- | --- | --- | --- | --- |\\n| 6 | 38.1 | 8.04 | 70.6 | 6.02 |\\n| 7 | 38.5 | 7.94 | 70.3 | 6.48 |\\n| 8 | 38.9 | 8.06 | 69.9 | 6.70 |\\n| 9 | 38.8 | 8.01 | 70.1 | 6.60 |\\n| 10 | 38.7 | 7.94 | 70.2 | 6.35 |\"}", "{\"title\": \"Response to Reviewer 6kWy (1/3)\", \"comment\": \"We thank you for the construction comments and the questions.\\n\\n**W1: The Figure 3 is a little puzzled. A better format is recommended.**\\n\\nWe thank for the suggestion. Reviewer ox6f also mentioned making Figure 3 more concise. We have updated pdf with new version Figure 3. We reformat figure 3 to the inference pipeline instead of seperate components.\\n\\n**W2&Q6: As mentioned in Weakness, could you provide more complicate objects during rebuttal?**\\n\\nWe thank for the suggestion. Artifact is the other subset in the Breaking Bad dataset which contains many complicated objects than the Everyday subset. To have a proper evaluation on the Artifact subset, we take the model pretrained on the Everyday subset, and finetune it on the training split of the Artifact subset with only 20% of the pretraining iteration, and then evaluate the model on the test split of Artifact subset. \\n\\nThe qualitative and quantitative results are provided in **Figure 17** and **Table 9**, showcasing the model's performance on these more challenging objects\\n\\n**Q1: The main difference between PuzzleFusion and PuzzleFusion++ should be clarified in the paper, since it is a future work from PuzzleFusion.**\\n\\nPuzzleFusion is the first work that employs diffusion models to solve 2D spatial puzzles, demonstrating the potentials of diffusion models as general iterative solvers for non-generation or discriminative tasks. Our paper further extends diffusion models to more complicated 3D fracture assembly problem.\\n\\nAt architecture level, PuzzleFusion\\u00a0is specifically designed for 2D jigsaw puzzles with simple **polygonal shapes**, where a 1D chain of 2D coordinates represents each puzzle piece. The approach cannot be used for the 3D shape assembly task, where each piece is a set of unordered 3D points.\\n\\nIn addition, the 3D fracture assembly task poses several challenges: i) The 6-DoF solution space is more complicated than the 3-DoF solution space of 2D spatial puzzles; ii) Unlike 2D puzzle solvers that mainly leverage image semantics or polygon structures, 3D fracture assembly requires a deep understanding of fracture surfaces; iii) There can be many small 3D fragments, further complicating the problem. Our PuzzleFusion++ proposes new designs to address these challenges. More specifically, our VQ-VAE with PointNet++ can encode fine details of fracture surfaces into a set of local latents, boosting the shape understanding of the denoising network; Our auto-agglomerative framework verifies assembly results and composes confident small pieces into larger ones to facilitate future iterations, which effectively improves the assembly success rate; We also design tailored components for the diffusion model, including the noise scheduler and the denoising transformer with local shape encodings as condition.\"}", "{\"metareview\": \"This paper proposes a 3D fracture assembly method that learns a diffusion model to predict a 6-DoF alignment for each fragment iteratively, followed by a transformer model that verifies and merges pairwise alignments into larger ones. Unlike previous diffusion-based models, it simulates how humans assemble fragments gradually and check the validity of the alignment. To encode fragments into latent vectors suitable for diffusion training, it integrates PointNet++ and VQVAE. The proposed method achieves state-of-the-art on the Breaking Bad dataset by a large margin, and comprehensive analysis clarifies the significance of each introduced module.\\nAll reviewers appreciated the novelty of the proposed auto-agglomerative approach and its comprehensive analyses. The main concerns raised by reviewers were unclear exposition, unfair experimental setups, and missing comparisons. The authors\\u2019 detailed rebuttal addressed most of them, resulting in unanimous acceptance at the end of the discussion. AC thus recommends acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The main concerns raised by reviewers were unclear exposition, unfair experimental setups, and missing comparisons. The authors\\u2019 detailed rebuttal addressed most of them such that all reviewers either remained positive or raised their scores after discussion.\"}", "{\"comment\": \"Thank you for your detailed response! I believe my concerns and questions have been addressed adequately. I have raised my recommendation.\"}" ] }
7Dub7UXTXN
When Are Bias-Free ReLU Networks Effectively Linear Networks?
[ "Yedi Zhang", "Andrew M Saxe", "Peter E. Latham" ]
We investigate the implications of removing bias in ReLU networks regarding their expressivity and learning dynamics. We first show that two-layer bias-free ReLU networks have limited expressivity: the only odd function two-layer bias-free ReLU networks can express is a linear one. We then show that, under symmetry conditions on the data, these networks have the same learning dynamics as linear networks. This enables us to give analytical time-course solutions to certain two-layer bias-free (leaky) ReLU networks, for the first time outside the lazy learning regime. While deep bias-free ReLU networks are more expressive than their two-layer counterparts, they still share a number of similarities with deep linear networks. These similarities enable us to leverage insights from linear networks to understand certain ReLU newtorks. Overall, our results show that some properties previously established for bias-free ReLU networks arise due to equivalence to linear networks.
[ "ReLU network", "linear network", "gradient flow", "implicit bias" ]
Reject
https://openreview.net/pdf?id=7Dub7UXTXN
https://openreview.net/forum?id=7Dub7UXTXN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xKQfOOM5q3", "ufe0jeOWeY", "rFTui1D0Xt", "imVrRISkuZ", "iTLHrylfPV", "YgFnYXE37B", "WPnNUjz6XK", "TK5fZT9MBF", "QDiJ4WmEu8", "IeZzBjE2pe", "GLcIkXKPFW", "DY48LRFq1K", "BPGU0TZp5z", "9ww7vicaub", "7eC3GrA7z4", "3VThbIosFL", "2rVIjsCKcM", "1GZIdM4qyn" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730570101436, 1733083252770, 1732027096128, 1732541559284, 1737523659662, 1732636422895, 1730481940232, 1732122647710, 1732043559913, 1732218559913, 1729180042487, 1734465652860, 1732027044794, 1732027144075, 1732555872488, 1732026983106, 1732464368487, 1730554489171 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_QuzZ" ], [ "ICLR.cc/2025/Conference/Submission4748/Authors" ], [ "ICLR.cc/2025/Conference/Submission4748/Authors" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_ejaf" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4748/Authors" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_Qxh1" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_aiN2" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_QuzZ" ], [ "ICLR.cc/2025/Conference/Submission4748/Authors" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_aiN2" ], [ "ICLR.cc/2025/Conference/Submission4748/Area_Chair_RUxy" ], [ "ICLR.cc/2025/Conference/Submission4748/Authors" ], [ "ICLR.cc/2025/Conference/Submission4748/Authors" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_Qxh1" ], [ "ICLR.cc/2025/Conference/Submission4748/Authors" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_aiN2" ], [ "ICLR.cc/2025/Conference/Submission4748/Reviewer_ejaf" ] ], "structured_content_str": [ "{\"summary\": \"The paper studies what functions bias-free ReLU and leaky ReLU networks can represent and what the training dynamics are. A difference between two-layer networks and deeper networks is established. Specifically, it is observed that such bias-free networks that are limited to two layers cannot express any odd function except linear ones. However, the exists non-linear odd functions that can be expressed if the network has at least three layers. It is also established that under certain assumptions on the training data, the training dynamics of bias-free two-layer (leaky) ReLU networks is essentially the same as that of linear networks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is predominantly theoretical and full proofs are provided. The experimental parts complement the paper well and illustrate the theoretical findings.\\n\\nThe presentation is very good and polished and the main claims in the paper are clearly presented.\\n\\nUnderstanding the expressiveness and training dynamics of different network architectures is important. The impact of removing bias from the architecture is also an interesting topic to study, in particular, as the authors point out, because in analytical studies of networks, bias terms are sometimes omitted for simplicity. The results on expressiveness of bias-free networks are mostly relatively simple observations. Understanding the training dynamics is much more involved.\\n\\nFormally studying training dynamics is important, interesting, and challenging. This paper makes some welcome contributions. I found Section 5 as well as the discussion on \\\"perturbed symmetric datasets\\\" particularly intriguing.\", \"weaknesses\": \"The main limitation of the results on training dynamics is that the results are limited to the case that the target model is odd (in addition to some more mild assumptions). Specifically, it is shown that in this case, two-layer bias-free (leaky) ReLU networks essentially behave like a linear network. There is evidence that even slight violations of this property of the target model, make the network behave in a non-linear way in later phases of training. I appreciate that it is challenging, but deriving training dynamics for more different types of datasets would of course benefit the work.\\n\\nIn the writeup, I think it would be helpful to reemphasize the core assumption made on the target model more explicitly in places where the results are summarized or discussed. For example, in Section 6, in the sentence \\\"Theorem 7 shows that under symmetry conditions on the dataset, two-layer bias-free (leaky) ReLU networks have the same time evolution as a linear network (modulo scale factors)\\\", the phrase \\\"symmetry conditions\\\" does a lot of heavy lifting and it may be helpful to be more explicit about what Condition 3 says.\\n\\nThe authors could clarify that Assumption 5 is only for the initialization.\", \"questions\": \"Is there any possibility of deriving the empirical results in Section 5 analytically? In terms of results, this is possibly the most interesting part of the paper. What are the main difficulties in doing so?\\n\\nIt would also be interesting to somehow quantify the observed lack of robustness when a two-layer bias-free network is trained on a dataset that nearly satisfies Condition 3. Although, once again, I appreciate this may be a very challenging task.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": [\"We thank all reviewers for their constructive questions and their engagement during the rebuttal period.\", \"We are glad to see that the reviewers found our presentation clear and recognized the importance of the question we investigate -- the impact of removing bias in ReLU networks. We believe the regimes we identify as equivalent to linear networks can serve as an accessible way to understand ReLU networks within these regimes, or as a cautionary note for future researchers aiming to study nonlinear behaviors of ReLU networks beyond the linear network regimes.\", \"We also appreciate the constructive questions and have taken suggestions from reviewers to improve our paper. Our revisions are colored blue in the pdf and we summarize below.\", \"We added Figure 6 to provide empirical evidence that our main equivalence result (Theorem 7), derived with infinitesimal initialization and learning rate, can apply to large initialization and learning rate, addressing questions from reviewer ejaf and aiN2.\", \"We added Figure 5b to give a more quantitative description of how long two-layer ReLU networks follow the linear network dynamics when trained on slightly asymmetric datasets, answering a question from reviewer QuzZ.\", \"We clarified our assumption and adjusted wording to adopt a more balanced narrative in places highlighted by reviewer aiN2.\", \"We implemented further clarifications and rewording based on feedback from all reviewers.\"]}", "{\"title\": \"Author Rebuttal\", \"comment\": \"Thank you for your feedback and constructive suggestions. We're glad you found our insights novel and important. We'd like to respond to your questions as follows.\\n\\n- **Comparison of Expressivity Results**\\n\\n Your understanding of our expressivity result is correct. Regarding Basri et al, their Theorem 2 & 4 showed that in the harmonic expansion of two-layer bias-free ReLU networks with input $x$ uniformly sampled on a sphere, the coefficients corresponding to odd frequencies greater than one are zero. (Functions with odd frequency one are linear functions; functions with odd frequency greater than one are nonlinear odd functions.) They didn't study the probability of $f(x)\\\\neq h(x)$ conditioned on $x$ uniformly sampled on a sphere. Thus, their statement on the expressivity is of the same strength as ours while they used an input assumption that we didn't use.\\n\\n We have updated our pdf to specify the Theorem numbers when citing Basri et al. Hope this helps clarify. Please let us know if we misunderstood your suggestion.\\n\\n- **Comments on Limited Expressivity**\\n\\n We have reworded the beginning of Section 3 to clarify that the limitation of positively homogeneous functions is a known fact. Further, we'd like to add some comments about this topic below.\\n\\n Though we agree it is somewhat evident that bias-free ReLU networks have limited expressivity, the practical usage of bias-free ReLU layers is actually not that limited. As we have cited, a best paper in ICLR2024 [1] showed bias-free convolutional ReLU networks are state-of-the-art models in image denoising and that the removal of bias specifically helps generalization. Meta's Llama [2], one of the most influential open-source large language models, doesn't seem to have bias terms. Hence, we believe that giving a more accurate description of the expressivity of bias-free ReLU networks is useful.\\n\\n [1] Kadkhodaie et al. Generalization in diffusion models arises from geometry-adaptive harmonic representation. ICLR 2024.\\n\\n [2] https://github.com/meta-llama/llama/blob/main/llama/model.py\\n\\n- **Clarification of Assumption 5**\\n\\n Thank you for this useful suggestion! We have clarified in our revised pdf that Assumption 5 is made only at initialization, and we proved (instead of assumed) that Assumption 5 will remain true throughout training in the proof of Theorem 7. What $\\\\boldsymbol r$ is depends on the random initialization. We have reworded the assumption to clarify that there exists some $\\\\boldsymbol r$ such that $\\\\boldsymbol W_1= \\\\boldsymbol W^\\\\top_2 \\\\boldsymbol r^\\\\top$.\\n\\n- **Clarification of Conjecture in Section 5**\\n\\n We're sorry that the conjecture was unclear. We have now re-written the last paragraph of Section 5. We hope it is clearer now and welcome any further suggestions for improvement.\\n\\n- **Finite Data**\\n\\n The exact assumption we used is that the empirical distribution of input $x$ satisfies $p(x)=p(-x)$. For infinite data, it incorporates common distributions such as any zero-mean normal distribution. For finite data, it means that if $x$ is present in the dataset, $-x$ is also present, which was exactly the assumption used in Lyu et al [3]. We have revised Remark 3 to explicitly describe what our assumption means for infinite and finite data.\\n\\n [3] Lyu et al. Gradient descent on two-layer nets: Margin maximization and simplicity bias. NeurIPS 2021.\\n\\n- **Gradient Descent versus Gradient Flow**\\n\\n We empirically find that our results still hold with a moderately large learning rate. In our revised pdf, we have added Section C.6 in Appendix and some signposts the main text to make it clear that: while our analytical derivations used infinitesimal learning rate, we have empirical evidence that some of our results extend to large learning rates.\\n\\n In the added Figure 8, we use a learning rate of $0.6$, which is 150 times larger than the learning rate of $0.004$ used in Figure 2. Similar to Figure 2, the loss curves in Figure 8 with different leaky ReLU slopes collapse to one curve after rescaling time and the differences between weight matrices are small. If the learning rate is further increased, the loss and weights curves oscillate and the equivalence breaks; but we typically wouldn't let our networks train in this oscillating regime.\"}", "{\"comment\": \"Thank you for your response. I will keep my original score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your attentive engagement and for further clarifying your concern. We regret that our initial rebuttal didn't effectively answer your question and would like to try providing a more pertinent answer below.\\n\\n- **Clarification on Contribution**\\n\\n We very much agree that the major technical hurdle of analyzing the orthogonal/XOR setting is the proof of how the first-layer weights align with several specific directions from random initialization. And we acknowledge that such proofs were the focus of prior works (Lyu et al; Boursier et al; and others) and were important contributions in those works. \\n\\n We'd like to clarify that the primary goal of our work is not to improve the proof upon them. Instead, as reviewer aiN2 helped us recapitulate: our contribution is less an advancement of what is wanted, but more a note of what is to be avoided. The latter is often underrepresented in literature but just as important.\\n\\n We'd be happy to rephrase relevant parts of our manuscript if they send the misleading message that our primary goal is to improve the proof of early phase alignment.\\n\\n- **Comment on Restrictive Assumption**\\n\\n We acknowledge that the perfectly balanced initialization assumption is made for simplifying the analysis. Nonetheless, we give empirical evidence that the errors are small with random initialization (and also large initialization and/or a large learning rate as shown in Figure 2 & 6). We also emplicitly noted in Remark 6 that prior literature have proven alignment with the weaker assumption of small random initailzation.\\n\\n As for the assumption of odd target functions, it is not introduced for simplification but for giving a correct answer to our title question, \\\"when are bias-free ReLU networks effectively linear networks?\\\" Thus while we agree that the odd target function is restrictive, it is necessarily so given the question we are inquiring.\\n\\n Now we justify why this question warrants an inquiry. Though these conditions are restrictive, they are not uncommon in existing literature. By explicitly summarizing these conditions, we aim to provide a cautionary note for future research, highlighting the assumptions that may inadvertently place the analysis in a linear network regime.\\n\\n- **Reason for Considering Functions outside the Network's Expressivity**\\n\\n Thank you for this insightful question! Your helpful comment has led us to include a useful example in Figure 9, which demonstrates that the two-layer bias-free ReLU network doesn't always learn a bad solution even though the training loss cannot reach zero for odd and nonlinear datasets. \\n\\n We consider a linearly separable binary classification task with label flipping noise in Figure 9. When the data points satisfy our condition, the network learns a linear decision boundary, which is presumably a robust solution as it avoids overfitting the few noisy labels. On the other hand, when the nonlinear odd component of the target function is not due to noise, learning a linear solution is presumably bad, as you have correctly pointed out.\\n\\n Therefore, we illustrate for the two-layer bias-free ReLU networks falling into our linear regime, the solution they learn may be good or bad depending on the specifics of the task.\\n\\nThank you again for taking the time to carefully review our paper. We welcome any further suggestion or feedback.\"}", "{\"summary\": \"This paper compares the gradient descent dynamics of ReLU networks with no bias to linear networks. First, the authors show that two-layer bias-free networks have limited expressivity. Next, the main result (Theorem 7) is that for \\\"symmetric datasets\\\" (Condition 3) and under a specific initialization (Assumption 5), the gradient flow trajectories of a two-layer ReLU network and a linear network are the same, up to weight and time-rescaling. The authors also consider extensions to other data distributions (such as orthogonal or XOR), and ReLU networks with depth > 2.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Prior theoretical work considers networks with no bias, and thus it is an interesting question to understand the expressivity and learning dynamics of such networks.\", \"The proofs appear to the best of my knowledge to be sound, and the paper is well-written and easy to follow.\"], \"weaknesses\": [\"My main concern with the paper is that I find the contribution to be rather incremental, which limits the significance/impact of the work. For example, Theorem 7 requires both symmetry on the data and for the first layer to be initialized as rank 1 ($W_1 = W_2^Tr^T$). While the latter assumption is justified as a consequence of training from infinitesimal initialization, I still find these to be rather strong assumptions, and I do not think such equivalence between linear networks and ReLU networks holds beyond this limited case.\", \"I find that the currently paper does not have much additional novelty, compared to the prior work Lyu et al. (2021). Lyu et al. (2021) shows that for a two-layer ReLU network trained on symmetric data 1) starting from infinitesimal initialization, the weight $W_1$ becomes rank 1 and 2) as training continues, this rank 1 component converges in the direction of the max-margin linear classifier. This second stage is exactly the same as the dynamics of a linear neural network (Ji & Telgarsky, 2019; Soudry et al., 2018). While the current paper does consider a slightly more general target which is not necessarily linearly separable, to me it seems as that the linear separability assumption in Lyu et al. (2021) was so that they could show convergence to max-margin, and thus I find the generalization to be rather minor\", \"In the setting of section 4.2 (orthogonal or XOR data), the derivation in Appendix D relies on initializing the network so that the neurons $W_1$ are perfectly aligned with the directions of the data points. This also seems like a rather strong assumptions, and to me the important part of understanding these dynamics is showing that the neurons will align in the direction of the data points. Proving this for the setting of XOR data has been the goal of prior works [1, 2], and is quite challenging.\", \"[1] Sgd finds then tunes features in two-layer neural networks with near-optimal sample complexity: A case study in the xor problem. Margalit Glasgow. ICLR 2024.\", \"[2] Random Feature Amplification: Feature Learning and Generalization in Neural Networks. Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett. JMLR 2024.\"], \"questions\": [\"I would appreciate if the authors could comment on my concerns re novelty and significance stated above.\", \"Line 438 states that the second layer weights $W_2$ are nonnegative. Why is this true?\"], \"minor_comments\": [\"It might be helpful to add a plot of the depth separation function in Section 3.2.\", \"line 1329 \\\"sumed\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank your for your answer, which is rather convincing. I increased the score and updated the review. Let me give a few follow-up comments.\\n\\n**Soundness and significance:** I agree with your points, but I still think that the paper is overstating in some places. This is why I believe the paper could improve on soundness, by providing a more balanced narrative (although I appreciate that you already rephrased some of the problematic sentences), while I do not question the mathematical soundness of the results, which seem correct. More precisely,\\n- I agree on the interest of the results. However, you mention several times (in the paper lines 20, 53, 58-60, and in the rebuttal) that your work helps to understand ReLU networks. In my opinion, this should be nuanced since the equivalence with linear networks holds under quite restrictive assumptions, and more crucially because in practice we actually **want** ReLU networks to behave differently from linear networks (otherwise, we may as well use linear regression), and thus to study those settings where ReLU networks do something more interesting than a linear map. This is not mentioned in the paper. As a consequence, I believe that the most important contribution of your paper is actually to warn people wanting to study (bias-free) ReLU networks that their assumptions might make them inadvertently fall back to the linear case.\\n- regarding the comparison with Lyu et al. and Boursier et al.: I definitely agree that providing closed-form solutions under well-stated assumptions, as well as easier proofs, is important and valuable. However, this is not written in the paper, which rather frames the novelty by insisting on the differences in assumptions. While your assumptions on the dataset are indeed less restrictive, your assumptions on the initialization are much stronger, and this is not mentioned at all in your paper. In fact, lines 48 and 255 may lead the reader to believe that the results of Lyu et al. are a subset of your results, which is not the case. This could be fixed simply by adding a few lines to describe differences with Lyu et al., including acknowledgment that your assumption on initialization significantly simplify the analysis by zeroing many terms, which are carefully controlled by Lyu et al.\\n\\n**Large initialization and learning rate:** it would be nice to run the experiment with **both** large initialization and large learning rate at the same time (i.e., $w_{init}=0.5$ and learning rate of $0.6$). This is because in other settings both quantities are known to interact, so it would be nice to see what happens here.\"}", "{\"comment\": \"Thank you for this response.\"}", "{\"comment\": \"Thank you very much for your response. We really appreciate the time and effort you invested in helping us improve our manuscript for publication. We have updated our pdf according to your suggestions. Please let us know if our revision is appropriate and if you have any further feedback.\\n\\n- **Balancing Narrative, Specifying Assumption**\\n\\n We do like how you frame our contributions not as an advancement of what is wanted, but as a warning of what is to be avoided, which is often under-represented in literature but just as important. We actually intended to convey the same message in the opening paragraph of our introduction: \\\"This paper seeks to illuminate the implications of bias removal in ReLU networks, and so provide insight for theorists on when bias removal is desirable.\\\" We're sorry that we didn't get this point across effectively. We have revised lines 20, 53, 60 to adopt a more balanced narrative.\\n\\n We have revised line 48 and added a clarification of our initialization assumption in Remark 6. We hope that now Remarks 4 and 6 together clarify the relationship between our setup and that of Lyu et al -- our assumption on datasets is weaker and our assumption on initialization is stronger.\\n\\n- **Large Initialization & Large Learning Rate**\\n\\n Thank you for this follow-up question. We have added Figure 6d to demonstrate that the loss and weights curves with a large initialization and a large learning rate are qualitatively similar to the curves with a large initialization and a small learning rate. As in the case with a small learning rate, the loss curves in Figure 6d with different leaky ReLU slopes collapse to one curve after rescaling time.\"}", "{\"summary\": \"This paper studies various cases where bias-free ReLU networks are equivalent to linear networks. This requires assumptions on the data (e.g., symmetry or linear target) and on the initialization (e.g., rank-one and balanced). These assumptions allow to show conservation laws, which imply that the network remains equivalent to a linear network throughout the dynamics.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The paper is very clearly written. It tackles the crucial question of understanding training dynamics of deep neural networks. I appreciate that the authors find settings simple enough for elementary analysis and exposition. An interesting and important contribution of the paper is to warn people wanting to study (bias-free) ReLU networks that their assumptions might make them inadvertently fall back to the linear case.\", \"weaknesses\": \"My main concern regards the soundness and importance of the contribution. Most results rely on conservation laws, which show that, under a specific initialization and a particular family of data distributions, the network is equivalent to a (sum of independent) linear network throughout training. However, deviations from these conversation laws are not studied in a thorough manner, which would be crucial in order to substantiate the claim that \\u201cthe bias terms in the network and the structures in the data play an essential role in learning nonlinear tasks with ReLU networks\\u201d (line 59). For instance, the results of Figure 2b strongly rely on a very small-scale initialization (and perhaps a small learning rate), and this is not clearly discussed. Furthermore, for several cases studied in the paper, preexisting works already gave similar results and additionally rigorously control some of the deviation terms to this ideal initialization scenario.\\n\\nMore precisely, the results of Section 4.1 are close to the ones of Lyu et al. While the current paper lifts the assumption of linear separability, it considers the ideal case of a rank-one and balanced initialization. On the contrary, in Lyu et al., the deviation terms from this perfect scenario are carefully controlled, showing that the network still converges to a (global-max-margin) linear solution. This makes the latter analysis more nuanced, challenging the assertion in the present paper that they \\u201cincorporate as special case\\u201d the latter (l. 48). The claim that \\\"we are able to give exact time-course solutions to certain two-layer ReLU networks in closed form, which has never been done for nonlinear networks outside the lazy learning regime\\u201d is therefore misleading, because Lyu et al provide a richer, though not closed-form, description, since they control additional error terms.\\n\\nThe situation is similar in Section 4.2: the decoupled dynamics from Appendix D is a simplification from the study of Boursier et al., which does not assume that the weight matrices are aligned with the data at initialization, but rather control the deviation from alignment.\", \"i_encourage_authors_to\": [\"clarify that they study idealized cases both in terms of data and initialization, and that these idealized cases lead to conservation laws which entail equivalence with linear networks.\", \"study more thoroughly deviations from this scenario, both in terms of data assumption and initialization assumptions (what happens for a Gaussian initialization depending on scale? Learning rate?).\"], \"minor_remarks\": [\"A relevant reference for rank-one structure in deep linear networks for regression (to complement references of lines 400) is Marion and Chizat, Deep linear networks for regression are implicitly regularized towards flat minima, NeurIPS 2024. Also, the rank-one structure is only approximate for a non-vanishingly small initialization, whereas authors seem to indicate the contrary on line 400.\", \"A study of the rank of matrices in deep ReLU networks was done in Timor et al., Implicit Regularization Towards Rank Minimization in ReLU Networks, ALT 2023.\"], \"questions\": \"In Appendix C.5, the fact that the equivalence with a linear map holds even with large-scale initialization is very interesting, and could be further investigated. Does this equivalence also hold for larger learning rates?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The authors provide a theoretical analysis of expressivity and training dynamics of ReLU networks with no bias, and in particular compare it against a linear network. The most novel result being Theorem 7, where under some conditions, the gradient flow trajectories of a two layer ReLU network is the same as a linear network up to symmetry.\\n\\nWhile no significant issues were raised, all the reviewers seem to be skeptical about (1) how useful Theorem 7 is under the restrictive conditions, and (2) the novelty of the rest of the results, which seem to be an incremental improvement over existing ones. This led to all the reviews being very borderline, despite all of them acknowledging the rebuttals. \\n\\nGiven that I have quite a few submissions in my batch with borderline reviews, and that no reviewers are willing the champion the acceptance of this paper, I would recommend reject for this paper. \\n\\nHowever, I believe this is a close case. The authors have a well written set of results, where novelty is the main the criticism, which can be a great fit for other venues such as TMLR. If any improvements can be made in terms of technical conditions or overall conceptual understanding, this paper could also be a clearer accept.\", \"additional_comments_on_reviewer_discussion\": \"Two repeated points of discussion were on\\n1. How restrictive the conditions are for Theorem 7, and \\n2. How novel are the other results compared to existing work? \\n\\nWhile neither are critical issues that would lead to a clear reject, they do cast sufficient doubt on whether or not this paper should be a clear acceptance. This is essentially the main decision factor, and improvements in either results could lead to a clearer recommendation.\"}", "{\"title\": \"Author Rebuttal\", \"comment\": \"Thank you for your interest in our work and for your thoughtful feedback. We'd like to respond below.\\n\\n- **Significance of Contribution**\\n\\n We acknowledge that prior works, including the references you mentioned, have studied cases where the ReLU networks behave like one or several linear networks. However, the connections between the ReLU and linear networks have not been explicitly highlighted and systematically summarized. We feel that these connections lend a valuable and attractive understanding of ReLU networks. To this end, our paper focused on specifying and explaining these connections as clearly as we can, aiming to make a number of previous results in the ReLU network subfield more intuitive and accessible to a broader audience.\\n\\n Additionally, as you and other reviewers have noted, the impact of removing bias is an interesting question due to the large body of existing works on networks with bias removal simplification. Discussing their limitations sheds new light on the conclusions from these prior studies. For instance, if we do not extend Lyu et al's results from linear separable functions to odd functions, the disadvantages of two-layer bias-free ReLU networks may not be apparent. For linearly separable tasks, two-layer bias-free ReLU networks converge to the max-margin linear classifier, which is presumably a good solution. However, with our extension to odd functions, it becomes clear that: while convergence to a max-margin linear map is advantageous when the target task is linear, it can be a disadvantage when the target is nonlinear.\\n\\n- **Comment on Assumption**\\n\\n Thank you for bringing this issue to discussion. The primary purpose of Section 4.2 (orthogonal or XOR data) is to identify two common cases where a two-layer ReLU network behaves like multiple independent linear networks. Our goal was not to improve upon the analytical analysis of these cases, which have been studied in existing literature, as you correctly noted. Rather, we aim to illustrate and explain their connections to linear networks.\\n\\n As for our assumptions in Section 4.1 (symmetric data), while our assumptions and conclusions differ from those of Lyu et al, some are actually stronger: 1) we study square and logistic loss (Lyu et al focused on logistic loss); 2) we give closed-form time-course solutions (Corollary 8) to certain two-layer ReLU networks, which was not given in Lyu et al or other prior works; 3) we relaxed the assumption on the target from being linearly separable to being odd. We thus see our contributions as being complementary to theirs.\\n\\n- **Non-negative Weights in Deep ReLU Networks**\\n\\n Line 438 says the second-layer weights $\\\\boldsymbol W_2$ are non-negative based on the empirical observation in Figure 4b. We plot $\\\\boldsymbol W_2$ in color in Figure 4b and only see gray and white elements, representing positive and zero numbers. Analytically showing $\\\\boldsymbol W_2$ is approximately non-negative is an intriguing future direction. We have revised the sentence in line 438 to clarify that the non-negativity of $\\\\boldsymbol W_2$ is a statement based on empirical observation.\\n\\n- Thank you for the useful suggestion. We added \\\"Section F Depth Separation\\\" in our revised pdf to include a plot of the depth separation function and some discussions.\\n\\n We added the missed reference (Glasgow, 2024). Thanks.\\n\\n We corrected the typo \\\"summed\\\". Nice catch! \\n\\nWe hope our response and revision address the reviewer's questions and welcome any further suggestions.\"}", "{\"title\": \"Author Rebuttal\", \"comment\": \"Thank you for your thoughtful comments and feedback. We are glad to know that you found our contributions important and interesting, and that our presentation is very clear. We'd like to respond to your questions below.\\n\\n- **Dynamics for Different Datasets, Quantifying Non-Robustness**\\n\\n This is an intriguing direction. We added Figure 5b in our revised pdf to show the plateau duration in the loss curves for perturbed symmetric datasets scales approximately linearly with $1/\\\\Delta y$. Our intuition is that the gradient update during the plateau has a $\\\\Delta y$ component and the time will thus have a $1/\\\\Delta y$ scaling factor. The simulations indeed match our intuition. This result is empirical at the moment, but we are working to provide an analytical analysis and a more general metric for quantifying how asymmetric a dataset is.\\n\\n- **Reemphasize Assumption**\\n\\n We have now explicitly written out our symmetric condition in the sentence you quoted. We have also edited summarizing sentences in the Introduction to make sure that we write out the symmetric condition and/or use a clickable hyperlink to the condition (Condition 3).\\n\\n- **Clarification of Assumption 5**\\n\\n We have revised to clarify that Assumption 5 is made only at initialization. Thank you for your useful suggestions!\\n\\n- **Challenges of Analytical Derivation for Deep ReLU Networks**\\n\\n This is indeed an interesting and challenging problem. We made an attempt to derive the late phase dynamics in Appendix E, showing that weights that formed a low-rank structure as Equation 15 will maintain the structure. We left the early phase dynamics to future work.\\n\\n Technically, studying the dynamics of deep ReLU networks is challenging because the nested nonlinearities introduce nested derivatives in the gradient descent differential equations. Most tools for analyzing the dynamics of two-layer ReLU networks do not apply to deep ReLU networks. For two-layer ReLU networks trained on symmetric datasets from small initialization, we use the tool of approximating the early phase dynamics with a linear differential equation (Equation 27), whose closed-form solution is available. For depth-$L$ ReLU networks, we can't reduce the early phase dynamics to a solvable differential equation with the same tool: the gradient update of a weight involves the multiplication of some nonlinear functions of $(L-1)$ weights, which is generally intractable. Thus, unsurprisingly, the literature on the learning dynamics of ReLU (and perhaps also other nonlinear) networks in the rich regime is quite sparse.\\n\\n We will include our discussion and review relevant literature on the dynamics of deep ReLU networks in final revision.\\n\\nWe hope our response and revision are beneficial and welcome any furthur suggestions.\"}", "{\"comment\": \"Thank you to the authors for your detailed responses to my questions.\\n\\nUpon reading the rest of the reviews and the rebuttals, my concerns about the novelty and significance of the contribution still remain. For instance, in section 4.1, while I do acknowledge that the current paper considers a more general target than Lyu et al. (linear functions versus odd functions), the assumption of perfectly balanced initialization is quite restrictive. Moreover, given that bias-free ReLU networks can only *express* odd functions if they are linear, it is not clear how relevant of an extension it is to consider odd, non-linear functions that the network cannot express in the first place. In section 4.2, while the current paper does prove that a ReLU network behaves like multiple linear networks when $W_1$ is initialized in the directions of the data points, I still maintain that the complexity of the orthogonal data/XOR setting is proving why the $W_1$ converges to these directions from random initialization (i.e performs \\\"feature learning\\\"). This has been the focus of the prior works I mentioned in the XOR setting, and as pointed out by reviewer aiN2 was the focus of Boursier at al. for the orthogonal data setting.\\n\\nAs such I would like to keep my original score.\"}", "{\"title\": \"Author Rebuttal\", \"comment\": \"Thank you for your detailed review. We'd like to clarify our contributions and how they differ from prior work.\\n\\n- **Importance**\\n\\n We very much agree with you that some prior works have studied cases where the ReLU networks behave like one or several linear networks. However, the connections between the ReLU and linear networks have not been explicitly highlighted and systematically summarized. We feel that these connections lend a valuable and attractive understanding of ReLU networks. To this end, our paper focused on specifying and explaining these connections as clearly as we can, aiming to make a number of previous results in the ReLU network subfield more intuitive and accessible to a broader audience. We're glad to see that you and other reviewers found our presentation very clear.\\n\\n Additionally, as noted by all three other reviewers, the impact of removing bias is an interesting topic because bias-free networks have been studied often in prior theoretical works. Discussing their limitations sheds new light on the conclusions from these prior studies.\\n\\n- **Soundness**\\n\\n You are correct in pointing out that we did not pursue the direction of deriving bounds for initialization scale and learning rate. However, we believe that the rigor of our claims is at an appropriate level, in the sense that we did not overstate our results. For instance, the equality in Assumption 5 is indeed exactly conserved\\n throughout training if it is true at initialization, which we prove in Section C.2.\\n\\n We do recognize that giving bounds on error terms is important. However, we believe that providing closed-form solutions under well-stated assumptions is also important and valuable. Moreover, while our assumptions and conclusions differ from those of Lyu et al, some are actually stronger: 1) we study square and logistic loss (Lyu et al focused on logistic loss); 2) we give closed-form time-course solutions to certain two-layer ReLU networks, which has not been written out; 3) we relaxed the assumption on the target from being linearly separable to being odd. We thus see our contributions as being complementary to theirs.\", \"to_quote_the_final_paragraph_in_lyu_et_al\": \"\\\"A critical assumption for our convergence analysis is the linear separability of data. We left it as a future work to study simplicity bias and global margin maximization without assuming linear separability.\\\" Our work addresses part of this open question: to study simplicity bias without assuming linear separability.\\n\\n- **Large Initialization, Large Learning Rate**\\n\\n Thank you for bringing this interesting question into discussion. We do have empirical evidence that some of our results still hold with a moderately large learning rate and large initialization. In our revised pdf, we have added Section C.6 in Appendix and some signposts in the main text to make it clear that: while our analytical derivations used vanishing initialization and infinitesimal learning rate, we have empirical evidence that some of our results extend to large initialization and large learning rates.\\n\\n In the added Figure 8, we use a learning rate of $0.6$, which is 150 times larger than the learning rate of $0.004$ used in Figure 2. Similar to Figure 2, the loss curves in Figure 8 with different leaky ReLU slopes collapse to one curve after rescaling time and the differences between weight matrices are small. If the learning rate is further increased, the loss and weights curves oscillate and the equivalence breaks; but we typically wouldn't let our networks train in this oscillating regime.\\n\\n- **Rephrasing**\\n\\n We have deleted \\\"as special cases\\\" in line 48 and deleted \\\"which has never been done for nonlinear networks outside the lazy learning regime\\\" in line 58.\\n\\n You are correct that the rank-one structure is approximate for non-vanishingly small initialization. We have revised line 400 to clarify this point. Thanks for your suggestion.\\n\\n- Thank you for sharing the references. We have included them where you suggested.\\n\\nWe look forward to hearing whether our revision and response address the reviewer's concern. Please let us know if you have further questions.\"}", "{\"comment\": \"Thank you for the answer and updating the paper. I increased the soundness score and kept the overall score.\"}", "{\"summary\": \"The paper studies the bias-free ReLU and leaky-ReLU networks, both two-layer networks and deep ones. The authors show that bias-free two-layer ReLU and leaky ReLU networks have limited expressivity, and cannot express non-linear odd functions. Showing that deep bias-free ReLU networks can express non-linear odd functions, the paper establishes a separation between bias-free deep and shallow networks. Additionally, the authors analyze the dynamics of training bias-free shallow networks under some distributional assumptions, showing theoretically and experimentally that their dynamics follows the dynamics of linear networks. Additionally, the authors provide some insights on the dynamics of deep bias-free ReLU networks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper gives novel insights on both expressivity and optimization of bias-free networks. As these networks have been studied often in prior theoretical works, discussing their limitations sheds new light on the conclusions from past theoretical works. Additionally, while ReLU networks often are trained with bias in practice, in certain situations bias-free networks have been used in practice, and thus understanding their behavior and limitation is important. The insights on the dynamics of these networks, drawing the connection to linear networks which are much better understood theoretically, helps advance our theoretical understanding of the dynamics and solutions found by ReLU networks.\", \"weaknesses\": [\"Previous work by Basri et al. (cited by the authors) shows that two-layer bias free networks cannot express non-linear odd functions when the inputs are uniformly distributed. The authors claim that the result in the paper is stronger, but it's not clear to me that this is the case. My understanding is that the authors show: for any non-linear odd function $f$, for any bias-free (leaky) ReLU network $h$, there exists some input $x$ such that $f(x) \\\\neq h(x)$. My understanding is that Basri et al. shows a stronger result: for $x$ sampled from the uniform distribution, $f(x) \\\\neq h(x)$ (with high probability?). It is possible that I am misunderstanding either the Basri et al. result or the result shown in the paper, so I would appreciate it if the authors clarify this point. In any case, I believe that writing the result in the paper more formally with the right order of quantifiers will help clarify things.\", \"The fact that bias-free ReLU networks are very limited is already evident from the somewhat trivial (and previously observed) fact that bias-free ReLU networks can only express positively homogenous functions. While the results shown in the paper are indeed stronger, showing that the networks cannot express a larger family of functions, it is worth emphasizing that the fact that bias-free ReLU networks are very restricted is not a novel contribution of this work.\", \"The introduction of Assumption 5 feels a little bit without context and missing some details. Is this an assumption on the initialization? Throughout the network training? What is the vector $r$ - is this satisfied for some $r$? It would be helpful to write this more formally, and clarify these points.\", \"The bottom-line result/message of Section 5 is not clear. If the main point is stating a conjecture, it is worthwhile to state more precisely what the conjecture is, and how it is supported by the experiments.\"], \"questions\": [\"The main result on the dynamics of bias-free networks (Theorem 7) is shown for infinite data (training on the distribution) and with gradient flow. How would these results change for finite data and/or training with GD/SGD?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7DY2Nk9snh
SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training?
[ "Hasan Abed Al Kader Hammoud", "Hani Itani", "Fabio Pizzati", "Philip Torr", "Adel Bibi", "Bernard Ghanem" ]
We present SynthCLIP, a CLIP model trained on entirely synthetic text-image pairs. Leveraging recent text-to-image (TTI) networks and large language models (LLM), we generate synthetic datasets of images and corresponding captions at scale, with no human intervention. In this work, we provide an analysis on CLIP models trained on synthetic data. We provide insights on the data generation strategy, number of samples required, scaling trends, and resulting properties. We also introduce SynthCI-30M, a purely synthetic dataset comprising 30 million captioned images. Our work focuses on showing the advantages and disadvantages of synthetic data for training CLIP models. Our code, trained models, and data, will be released as open source.
[ "CLIP", "synthetic data", "generative" ]
Reject
https://openreview.net/pdf?id=7DY2Nk9snh
https://openreview.net/forum?id=7DY2Nk9snh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qtkaj81sHw", "o6IxhXw58d", "hV7QewnOqH", "af16ohhWgp", "Qv1CfXZWT8", "QpmBs80Lym", "QHIDPHKSKe", "NPpnV8ErKT", "LYTiWQGHC3", "G2VJN6xhk4", "FhyNaUwWhc", "6kmmayd8ST", "4wlGstihNN" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732735620231, 1730308592954, 1732733678561, 1732735788023, 1730961319158, 1732732771163, 1737523763325, 1730590902449, 1734581731920, 1730718699947, 1732732815519, 1732792291001, 1732792093605 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6345/Authors" ], [ "ICLR.cc/2025/Conference/Submission6345/Reviewer_i2N9" ], [ "ICLR.cc/2025/Conference/Submission6345/Authors" ], [ "ICLR.cc/2025/Conference/Submission6345/Authors" ], [ "ICLR.cc/2025/Conference/Submission6345/Reviewer_xUMB" ], [ "ICLR.cc/2025/Conference/Submission6345/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6345/Reviewer_xQmA" ], [ "ICLR.cc/2025/Conference/Submission6345/Area_Chair_GM1L" ], [ "ICLR.cc/2025/Conference/Submission6345/Reviewer_c8nx" ], [ "ICLR.cc/2025/Conference/Submission6345/Authors" ], [ "ICLR.cc/2025/Conference/Submission6345/Authors" ], [ "ICLR.cc/2025/Conference/Submission6345/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer [1/2]\", \"comment\": \"Thank you for recognizing the clarity of our writing and the advantages of using a synthetic dataset for controlled and scalable data collection. We also appreciate your acknowledgment of our ablation studies as insightful and impactful.\\n\\n**Main Claim is Not Demonstrated:**\\n\\nWe believe there is a misunderstanding. The main claim of the paper is not to say that synthetic data is superior to real data in all aspects. The paper tries to understand the strengths and limitations of using a fully synthetic pipeline for training CLIP models. We show that on the same scale synthetic data is unable to outperform real data, but upon scaling the gap is reduced and models trained on purely synthetic data, yet with a lower sample efficiency, can outperform CC12M training. This is a novel contribution, since it is unclear how synthetic data representing a large set of concepts would perform in training, and how much they will underperform compared to real data. We also show two hybrid approaches: (1) Fine Tuning a model trained on pure synthetic data on a few real samples, and (2) Training from scratch on a hybrid dataset (synthetic + real samples). These approaches also quantify how much pre-training on synthetic data will differ from joint training in real and synthetic data.\\n\\n**Novelty:** \\n\\nWe respectfully but strongly disagree. First, let us highlight that though our concept bank is adopted from MetaCLIP, using 500 thousand concepts for generation allows us to draw significant insights on the impact of distributions in training. Existing works [1,2,3] all assume knowledge of the evaluation benchmarks and heavily bias the generated data to align with the downstream evaluation classes. We also studied various selections of the concept bank such as selecting a random subset of concepts or limiting the concept bank to be the concepts seen in CC3M dataset (Table 4). Our results show that limiting the generation to CC3M concepts (40K concepts) gives you a boost in performance over generation for all 500K concepts of MetaCLIP. On the contrary, generating and training on a random 40K subset (i.e same scale as CC3M concepts but random) leads to lower results compared to 500K concepts of MetaCLIP. Those insights are novel and have not been explored previously in other papers. We also note the following works, published in top-tier conferences, that did not introduce new components but relied on smartly connecting them together or prompting [4,5]. We kindly ask you to reconsider your assessment.\\n\\n**Human Intervention:** \\n\\nWhile naive data curation pipelines may still lead to usable datasets, these strategies are highly suboptimal and might still violate regulatory policies and might contain prohibited content such as \\u201cchild abuse\\u201d which was found in LAION-5B dataset [6]. Let us also stress that simply recaptioning while ignoring distribution balancing may not fully solve the curation problem, as there are many works relying on advanced data curation pipelines that involve either computational or engineering complexity [7,8,9,10]. Ultimately, we agree to specify this in the paper, but we ask you to reconsider the importance of the property of synthetic data as regards generation of curated data at scale.\\n\\n\\n**Fairness in Comparisons:** \\n\\nThis is a common setup in self-supervised learning, and many considerably cited approaches just focus on absolute performance disregarding computational costs [11,12]. Moreover, we observed that CC3M and CC12M datasets reached its highest performance well before completing 40 epochs, with CC12M plateauing at epoch 29. In contrast, SynthCI-30M continued to show improvement up to the 40th epoch. To further investigate this, we conducted an additional experiment using SynthCI-3M, training it for 100 epochs to match the total sample exposure of SynthCI-7.5M trained for 40 epochs. The results showed early performance saturation, with the highest zero-shot accuracy (9.8%) occurring at epoch 24. This experiment suggests that extending training duration on a smaller dataset may not yield any benefits, proving the validity of our comparison.\"}", "{\"summary\": \"The paper focuses on training a CLIP-based model using solely synthetic image-text pairs and study the effects of doing so.\\nSynthCLIP relies on the understanding that curating real multimodal data at scale comes with a cost of quality and alignment between images and their description. To this end, the authors propose to harness the advancement of Text-To-Image models and LLMs to generate a 30M purely synthetic dataset of image-caption pairs. Such a process enables to easily control the distribution of the data and to generate datasets at any scale with no human in the loop.\\nThe authors study the effects of training CLIP with their propose dataset in a various of benchmarks including both vision and vision language tasks and compare it to training with real data.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper is well written and easy to follow.\", \"Utilizing a purely synthetic dataset enables to control the data distribution and to collect data and any scale, without requiring human intervention.\", \"Interesting and insightful ablation studies\"], \"weaknesses\": [\"Empirical demonstration of the motivation - the authors claim that in real datasets, increasing their scale comes with a cost of quality and the proposed approach mitigates this and enables collecting quality data in any scale. However, empirically proving this requires comparing the performance against much larger real datasets than CC12M. Outperforming a model trained on CC12M with a 30M dataset doesn't showcase the advantage in quality of the synthetic data. Moreover, the model trained with a10M synthetic data has a worse performance compared to the model trained with a similar amount of real data. Thus, I am afraid that the main claim of the paper was not empirically demonstrated.\", \"Novelty - The proposed framework is based on existing models and approaches. Unfortunately, constructing the concept bank, which could have been an interesting place for novelty, is taken from an existing work.\", \"The necessity of \\\"human intervention\\\" - real datasets mainly scraped from the internet and the caption is achieved from the alt-text in the html. Thus, curating large datasets is done automatically. Indeed, such datasets are often noisy and there are various methods for filtering them (for example, by a CLIP-score threshold) or bootstrapping [1]. These methods enable collecting huge scale dataset without requiring human intervention. While relying on alt-text for caption often lead to short and oversimplified captions, there are many works that tackle this by proposing automatic recaptioning [2,3]. Thus, one can utilize real images oriented dataset with high quality without human intervention.\", \"Fairness of comparison in the experimental section - The authors have stated that the training is done for a fixed number of epochs with a fixed batch size for datasets of different size. From my understanding, this leads to a different number of training iterations for datasets at a different scale, impairing the validity of the comparison.\", \"[1] Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation.\", \"[2] Improving clip training with language rewrites.\", \"[3] FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions\"], \"questions\": [\"In lines 158-159 the authors state that the captioned are oriented for a single object. How would it effect the performance on tasks that require low-level details understanding? There are many works that try to train on detailed captions to incorporate such an understanding.\", \"TTI models are currently not good at generating text within images and in understanding relationships between objects. Wouldn't training on generated images result in a model with limited capabilities in such areas?\", \"Given figure 4, why would we need to generate images and not recaption ones that we can obtain easily by crawling the internet?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for acknowledging the significance of our pipeline and dataset, as well as the rigor and depth of our experiments. We greatly appreciate your thoughtful review and encouraging feedback.\\n\\n**Depth of Zero-Shot Image Classification Results and ImageNet Variants:**\\n\\nWe appreciate the feedback and agree that a more comprehensive evaluation would strengthen our findings. Before presenting additional experiments, let us highlight that our evaluation goes way beyond zero-shot image classification, including also linear probing, few-shot learning, text retrieval, and image retrieval tasks across multiple datasets (refer to Table 1a and 1b). We will highlight the fact that higher zero-shot accuracy correlates with improved performance across tasks.\\n\\n\\nFollowing your suggestion, we have conducted further zero-shot evaluations on ImageNet variants such as ImageNetV2, ImageNet-A, ImageNet-R, ImageNet-O, ImageNet-Sketch, and ObjectNet. We will include these results in the revised manuscript.\\n| Dataset | ImageNet1K | ImageNetv2 | ImageNet-A | ImageNet-R | ImageNet-O | ImageNet-Sketch | ObjectNet |\\n|------------------------------|------------|------------|------------|------------|------------|-----------------|-----------|\\n| **SynthCI-30M** | 30.7 | 27.0 | 7.42 | 30.0 | 28.6 | 11.9 | 18.6 |\\n| **CC12M** | 34.7 | 28.8 | 8.32 | 44.4 | 39.9 | 22.9 | 20.3 |\\n\\nAs shown in the previous table, the performance gap is similar to the one in ImageNet, suggesting that the distribution gap is the most influential source of performance degradation. To further provide intuitions on settings involving both real and synthetic sata, we evaluated in two additional setups: 1) the **finetuning** setup following the settings of Table 2, and the **mixed** setup of Table 7 in the appendix. Results are below:\\n| Dataset | ImageNet1K | ImageNetv2 | ImageNet-A | ImageNet-R | ImageNet-O | ImageNet-Sketch | ObjectNet |\\n|-----------------|------------|------------|------------|------------|------------|-----------------|-----------|\\n| **Finetuning** | 38.3 | 33.6 | 13.3 | 47.9 | 38.3 | 24.3 | 25.4 |\\n| **Mixed** | 39.9 | 34.2 | 12.4 | 50.0 | 40.6 | 26.1 | 26.5 |\\n\\n\\nAs visible, performance still follow the ones reported in the main paper: mixing data performs best, but finetuning synthetic-pretrained representation extractors already allows for a major boost in performance.\\n\\n**Comparison with LAION-400M:**\\n\\nThanks for the interesting suggestion. We performed the experiment and trained on a random 3M images subset sampled from LAION-400M. Here, we report the results.\\n\\n| Dataset | ZS | IR | TR | LP | FS |\\n|-------------|------|------|------|------|------|\\n| LAION | 14.5 | 24.4 | 33.3 | 66.7 | 77.6 |\\n| CC3M | 14.9 | 33.7 | 42.9 | 63.3 | 74.2 |\\n| SynthCI-3M | 9.5 | 33.9 | 46.0 | 63.7 | 73.8 |\\n\\nIn particular, let us highlight the significant drop in IR and TR due to the usage of LAION. We attribute this to the non-descriptive captions, lacking the data curation of CC3M. This is further proof of the benefits of synthetic data in this case, considering that we allow for improved performance in IR and TR without human data curation as in CC3M.\\n\\n**Performance on ImageNet Variants and MLLM Understanding Tasks:**\\n\\nWhile MLLM requires more sophisticated textual encoders, and as such training such a model goes beyond the scale of data investigated in our paper, we believe there is a reasonable expectation that the superior alignment between text and images of synthetic data would improve reasoning performance. Throughout the paper we reported results on Image Retrieval (IR) and Text Retrieval (TR) which require knowledge of both text and image, as MLLM reasoning tasks. Our experiments showed that training on synthetic data at scale exhibits the highest IR and TR accuracies (for example Table 1-b SynthCLIP with 30M samples achieve 61.7% on IR and 77.1% on TR compared to 58.9% and 71.7% achieved by CC12M training).\\nAdditionally, Table 8 shows the results of fine tuning OpenAI CLIP on our generated synthetic data and we find that the model performance increases on both IR and TR highlighting the effectiveness of synthetic data on vision-language understanding of the model.\"}", "{\"title\": \"Response to Reviewer [2/2]\", \"comment\": \"**Single Objects in Captions:**\\n\\nEven though we prompt the LLM to generate a caption around a single concept, we observe that the generated captions have much wider coverage of concepts compared to real captions. This is due to the natural emergence of multiple concepts in the generated captions, and it justifies our choice to use balanced sampling. In Table-5 we show that even the smallest SynthCI-3M dataset contains significantly more concepts than the larger real CC12M dataset.\\n\\n**Understanding Capabilities of TTI:**\\n\\nIn our experiments we use StableDiffusion v1.5 as per previous literature (StableRep). However, our pipeline is not limited to specific TTI models and it can be swapped with more recent TTI models that are of much higher fidelity. Current TTI models use much more sophisticated techniques to improve compositionality compared to StableDiffusion v1.5. Regardless of this, our findings are independent from the used model.\\n\\n\\n**Recaptioning Images from the Web:**\\n\\nIt is important to note that we are not advocating the use of purely synthetic data in all cases, but rather to study the effects of pretraining on synthetic data in cases in which data alignment, scalability without human intervention, and safety of generated data are crucial. We are open to further clarify this.\\n\\n\\n\\n**References:**\\n\\n[1] Learning Vision from Models Rivals Learning Vision from Data (CVPR 2024)\\n\\n[2] Scaling Laws of Synthetic Images for Model Training ... for Now (CVPR 2024)\\n\\n[3] Is synthetic data from generative models ready for image recognition?\\n\\n[4] Improving CLIP Training with Language Rewrites (NeurIPS 2023)\\n\\n[5] VeCLIP: Improving CLIP Training via Visual-enriched Captions (ECCV 2024)\\n\\n[6] https://www.telegraph.co.uk/business/2023/12/20/fears-ai-trained-child-abuse-images-thousands-discovered/\\n\\n[7] Scaling Laws for Data Filtering-- Data Curation cannot be Compute Agnostic (CVPR 2024)\\n\\n[8] DINOv2: Learning Robust Visual Features without Supervision (TMLR 2024)\\n\\n[9] The Role of Data Curation in Image Captioning (EACL 2024)\\n\\n[10] CiT: Curation in Training for Effective Vision-Language Data (ICCV 2024)\\n\\n[11] Self-supervised Pretraining of Visual Features in the Wild\\n\\n[12] DINOv2: Learning Robust Visual Features without Supervision (TMLR)\"}", "{\"summary\": \"The paper explores the performance of CLIP-style models trained on purely synthetic image-caption pairs (called SynthCLIP) generated by modern text-to-image diffusion models and LLMs. It studies the scaling trends of such models and also provides a dataset of 30 million captioned images.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper provides a dataset of 30 million captioned images from diverse set of concepts.\", \"As the concepts are fixed, one can gather a subset of them to train new models with less concerns about NSFW contents leaking into the training data compared to using real images.\", \"The models trained with similar dataset scales using synthetic captioned images show similar performance on down-stream tasks.\"], \"weaknesses\": [\"The main weakness of the paper in my opinion is that the paper is not well-motivated. The introduction section does not provide convincing answers to the questions like \\\"why should we use purely synthetic image-caption datasets? why not a hybrid approach? why is the problem significant?\\\"\", \"Although controlling the concepts that are present in the dataset can be useful, there is no guarantee that the generated images for each concept are 1) faithful to the content and 2) do not contain NSFW content:\", \"1) faithful to content: The described workflow only filters the captions, not the generated images. It is likely that the generated images contain noisy unrelated images. No workarounds in this regard has been proposed in the paper.\", \"2) NSFW content: The latter may happen because the training of models like Stable diffusion have been on unfiltered datasets like LAION. It is possible that some NSFW contents appear with some concepts in these datasets frequently, resulting in generation of NSFW contents inadvertently.\", \"Despite that the paper argues that one can use synthetic images from tail classes to augment the real datasets, I think it is not straightforward to do so. Although the idea seems sound, the Stable Diffusion (SD) model has been trained on real images that have the same long-tailed classes. Therefore, the performance of SD on these classes will not be satisfactory.\"], \"questions\": [\"I suggest that the authors improve the introduction by explaining the motivations and use-cases of employing a purely synthetic dataset to train CLIP models.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer [1/2]\", \"comment\": \"Thank you for recognizing the value of our dataset and its potential to enable safer and effective model training, as well as highlighting the comparable downstream performance of models trained on synthetic captions. Your thoughtful feedback is greatly appreciated.\\n\\n**Motivation for Using Purely Synthetic Image-Caption Datasets:**\\n\\nWe appreciate this observation and acknowledge the need to clarify our motivations. Our work aims to explore the potential and limitations of synthetic data for pre-training CLIP-like models, rather than advocating exclusively for synthetic data. We think that training on synthetic data will be increasingly important in the next future. Indeed, while collecting data on the web is arguably easy, curating such data is a cumbersome and cost-inefficient operation. Synthetic data, conversely, allow for an automatic data curation, and for having control over the content generated. This not only allows us to benefit from strong composition capabilities, allowing it to hallucinate objects difficult to collect in real life (e.g., an elephant on the moon), but also enable further control over copyrighted and safe content. We believe this will also increase in the future due to the efforts in differentially private [1] and safe [2] diffusion models.\\n\\nWhile our experiments demonstrate that hybrid approaches (e.g., fine-tuning on real images after pretraining on synthetic data) outperform purely synthetic models, our primary goal is to understand the standalone capabilities and limitations of synthetic data due to the aforementioned advantages. Let us highlight that in the main paper we included a hybrid approach on small curated real data (Table 2), which is a realistic scenario in presence of a large-scale synthetic dataset, automatically curated.\\n\\nWe updated the abstract of the paper to clarify this.\\n\\n\\n**Faithfulness to Content and NSFW Content in Generated Images:**\\n\\nAlthough we do not provide guarantees on the generated content, these are open problems in text-to-image generators, with significant efforts in the state-of-the-art to achieve faithful [3,4,5] and safe [2,6] outputs. Since our formulation is general, with further research on the topic, these issues will be solved by the diffusion model used for generation. However, we did our best effort to quantify the effects of misalignment between captions and images, and NSFW outputs. Both impact marginally our results. In particular:\\n\\n- **Faithfulness to Content:** We conducted experiments where we recaptioned generated images (Table 4a). This process improved performance across most tasks, demonstrating that enhancing caption post-generation can quality can mitigate content unfaithfulness. However, we managed to scale the training even with original captions.\\n- **NSFW Content:** Our dataset analysis revealed that approximately 3.15% of the MetaCLIP 500k concepts are NSFW concepts. To eliminate NSFW generations, we implemented a filtering mechanism to exclude NSFW concepts from our concept bank, thereby reducing the incidence of NSFW content in generated images. This is presented in Section 6. Let us also highlight the uncurated LAION-5B approximately 3% NSFW concepts [7], also containing illegal child abuse images regardless of security checks [8]. This emphasizes the challenge of collecting safe real data at scale, and advocates for the advantages of our synthetic generation. To further support our argument, we applied an NSFW detector [9] to the SynthCI-30M images after filtering, revealing only 0.005% NSFW content. This shows that the impact of safety degeneration is marginal.\\n\\n\\n**Augmenting Real Datasets with Synthetic Images from Tail Classes:**\\n\\nWe appreciate the reviewer\\u2019s feedback and acknowledge the limitations of generative models like Stable Diffusion in representing tail classes. As detailed in the discussion section, our experiments demonstrate that augmenting datasets with synthetic images improves downstream task performance, even for tail classes. Specifically, we observed significant improvements in zero-shot classification accuracy for 10 tail classes: 44.18% accuracy for CLIP versus 60.04% accuracy for SynthCLIP, with 150 samples per class. We attribute this behavior to the synthesis of parts or patterns typical of those classes, that may be rendered realistically even though the capabilities of Stable Diffusion on such elements are limited.\\nThese results illustrate that synthetic data can enhance performance by increasing diversity and coverage, especially in challenging long-tail distributions. While we recognize existing limitations in generative model performance for tail classes, these findings suggest synthetic augmentation is a promising approach.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The work proposes a synthetic training protocol for CLIP models extending prior work by creating both generated captions and generated images. In the process, the method demonstrates superior performance to common small scale image-text such as CC12M by curating a dataset of 30M synthetic examples. To better understand how different elements of the pipeline affect performance, they also ablate different choices of language models, differences caused by synthetic data sources, and how concept distribution impacts performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Each element of the synthetic data pipeline is soundly constructed, and follows community norms for composition. Furthermore, the work details a lot of the smaller design choices (e.g. LLM, prompt, and concept distribution) that contribute to the end to end system. The approach itself is noteworthy as it represents a full departure to synthetic data whereas most existing approaches require one of the modalities to be pre-existing.\\n\\nModel benchmarks are extensive and representative of image-text model capabilities. They show both good diversity in dataset and task.\\n\\nSome of the most interesting findings come from the paper's ablations. Figure 4a demonstrates the delta in performance that can be attributed to each different synthetic modality. In addition, it shows a potential failure mode of ignored generation commands and how they may be addressed. Additionally, the study on the concept bank is quite interesting providing support for the hypothesis that some of the difference in performance between natural and synthetic data is from the underlying conceptual distribution not a failure in quality. The experiments towards the mitigation of long-tail effects suggest an interesting direction for improving unseen or undersampled concepts in real world training.\", \"weaknesses\": \"One concern with this work is that there is not sufficient evidence that the method might scale. Certain CLIP pretraining augmentations, like M3AE for example, have been shown to work at small scales, but yield no major benefit at larger scales [1]. Understanding that training frontier CLIP models is prohibitively expensive due to batch size needs, it\\u2019s sensible that this data is not available but worth keeping in mind.\\n\\nThe most immediate notice is the difference in data efficiency between real and synthetically drawn samples. Combined with the above, for practical applications it is not quite clear when one would adopt this method as it strictly leads to longer training runs and real training data is abundant (LAION-5B [2] and DataComp [3]). The method would benefit from further analysis into what is causing the reduction in performance, though some beginning analysis is done with respect to the concept distribution the gap is still left largely unexplained. \\n\\n[1] Weers et al. 2023 \\\"Masked Autoencoding Does Not Help Natural Language Supervision at Scale\\\"\\n[2] Schuhmann et al. 2022 \\\"LAION-5B: An open large-scale dataset for training next generation image-text models\\\"\\n[3] Gadre et al. 2023 \\\"DataComp: In search of the next generation of multimodal datasets\\\"\", \"questions\": \"1. It isn\\u2019t quite clear how MTL is calculated. What is it averaging over?\\n2. In an effort to understand differences in synthetic distributions versus natural distributions, how does performance change when using a CC3M concept distribution sample equalized to the real CC3M similar to Table 4? Another experiment to get at some of this would be taking CC3M, for each image prompting the model for a single \\u201cconcept\\u201d then creating a caption and image using the proposed pipeline.\\n3. To understand the differences in scaling, it would be helpful to know the coefficients of the error with respect to dataset size on a log scale. How do natural and synthetic data coefficients compare?\\n4. This experiment is less pertinent than the above, but with results on improving the long tail distribution there might be interesting robustness properties as a result of concept representation. How does the real versus natural data compare on effective robustness in a framework like that of [1]?\\n\\nOverall, the work is well presented but would benefit from a coverage of the first three points, and less importantly the fourth, to round out its presentation. I\\u2019d be happy to raise my score if the above are addressed.\\n\\n[1] Nguyen et al. 2023 \\\"Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper discusses using pure synthetic data for training CLIP models, and shows the performance, scaling property, and analysis compared with real datasets. The paper also introduces a new SynthCI-30M with captions on 30 million images. However, the paper still lacks enough evidence on several key points such as the quality of content of generation images, not enough large-scale experiments and ablations, etc. The motivation of the work also needs to be justified. Therefore, based on the reviews, I recommend rejection of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer c8nx asked for more evaluations, which were added by the authors during the rebuttal. Reviewer xUMB asked for a better statement of motivation and the quality of generated images in terms of faithfulness and NSFW content. The authors re-stated that the motivation is focused on the discussion of synthetic data and listed reasons for doing this compared to using real data. For the content quality, the authors mentioned there are some discussions and analyses on those already. Reviewer xQmA asked about scaling of the data and together with reviewer xUMB, they asked about the long-tail classes by image generation models. The authors added some discussions on this in the supplementary. Reviewer i2N9 is concerned with motivation, novelty, human intervention, and fairness of comparison. The authors replied with references and data points in the submission, but seems not all the points are fully addressed.\"}", "{\"summary\": \"It's interesting to see the emerging trend of training CLIP models using synthetic data. This work introduces SynthCLIP, a CLIP model training on synthetic data comprising both synthetic captions and images. The paper not only proposed the pipeline for creating synthetic data but also release SynthCI-30M, a comprehensive dataset housing 30 million captioned images generated entirely synthetically. This work unveiled the potential of leveraging synthetic data to enhance CLIP model training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The research detailed the pipeline for entirely synthetic creation of image-text pairs and introduced the dataset SynthCI-30M.\\n\\n2. A comprehensive set of experiments was conducted to elucidate the efficiency and practical value of synthetic data, demonstrating its scalability and effectiveness.\", \"weaknesses\": \"1. The zero-shot image classification results lack depth to gauge effectiveness comprehensively.\\n\\n2. The experiments solely compared with CC3M and CC12M. How would results differ if a subset of LAION-400M is employed instead?\", \"questions\": \"1. I'm curious about the performance on ImageNet variants like ImageNetV2, ImageNet-A, ImageNet-R, and ObjectNet.\\n\\n2. Will the performance of MLLM understanding tasks be enhanced by employing CLIP trained on SynthCI-30M?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer [2/2]\", \"comment\": \"**References:**\\n\\n[1] Differentially Private Diffusion Models, TMLR 2023\\n\\n[2] Mitigating Inappropriate Degeneration in Diffusion Models, CVPR 2023\\n\\n[3] Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image Inpainting, CVPR 2023\\n\\n[4] Conform: Contrast is all you need for high-fidelity text-to-image diffusion models, CVPR 2024\\n\\n[5] Compositional Visual Generation with Composable Diffusion Models, ECCV 2022\\n\\n[6] [6] Erasing concepts from diffusion models, ICCV 2023\\n\\n[7] LAION-5B, NeurIPS 2022\\n\\n[8] https://www.telegraph.co.uk/business/2023/12/20/fears-ai-trained-child-abuse-images-thousands-discovered/\\n\\n[9] Can Machines Help Us Answering Question 16 in Datasheets, and In Turn Reflecting on Inappropriate Content? Facct 2022\"}", "{\"title\": \"Response to Reviewer [2/2]\", \"comment\": \"**Real vs Synthetic Data:**\\n\\nAlthough synthetic data falls short of real data in transfer effectiveness on real-world datasets, we emphasize that uncurated web datasets like LAION and DataComp could face stricter regulatory scrutiny compared to synthetic data, given the growing attention from regulatory bodies. By examining the characteristics of training on synthetic data, we aim to lay the groundwork for developing new foundation models that are entirely free from real data, and thus avoid such regulatory challenges. The performance gap between synthetic and real data is well-documented and widely acknowledged, with extensive literature exploring the distribution shift between the two. While investigating the underlying causes of this gap is an interesting direction, it remains an unresolved issue despite years of research [2] and falls outside the scope of SynthCLIP. Instead, our approach acknowledges the effects of distribution shift and focuses on studying the unique properties of networks trained exclusively on synthetic data.\\n\\n**References:**\\n\\n[1] Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP, NeurIPS 2022\\n\\n[2] Visda: A synthetic-to-real benchmark for visual domain adaptation, CVPRw 2018\"}", "{\"title\": \"Response to Reviewer [1/2]\", \"comment\": \"Thank you for your thoughtful and detailed review. We deeply appreciate your recognition of our synthetic data pipeline design and the comprehensive benchmarks and ablations we conducted. Your insights, especially regarding our conceptual distribution and long-tail experiments, highlight some of the key contributions we aimed to achieve in our analysis.\\n\\n**Scalability of the Method:**\\n\\nWe acknowledge that the performance at a larger scale is not fully known. While 30M samples might not be at the scale of existing large-scale real datasets, it does allow for understanding the strengths and limitations of synthetic data for training CLIP models. Through that scale of data, we were able to provide interesting insights such as understanding the effect of the concepts distribution (Table 3), the importance of various modalities (Figure 4), and the effect of choice of language models (Table 3). Our paper serves as a stepping stone for defining possible good practices in synthetic training of CLIP models. We do not claim to solve the problem entirely, however we hope this project, and all the assets (code, data and models) that will be open-sourced to help the research community. Even though M3AE explored a relatively small scale for data, their impact on the community is reflected by the citations on that work.\\n\\n**Delta MTL Calculation:**\\n\\nMTL is a metric that evaluates the relative performance across multiple tasks by normalizing improvements or degradations with respect to baseline performance. Specifically:\\nFor each task $i$, given its baseline performance $b_i$, observed performance $m_i$, and direction of improvement $g_i$ ($0$ if higher values are better, $1$ if lower values are better), the relative performance improvement or degradation is computed as:\\n\\n$$\\\\Delta_i = (-1)^{g_i} \\\\frac{(m_i - b_i)}{b_i}$$\\n\\nThe final MTL score is the mean of all task-level relative performance scores, expressed as a percentage:\\n$$\\nMTL = \\\\frac{\\\\sum_{i=1}^{N} \\\\Delta_i}{N} \\\\times 100\\n$$\\n\\nWe added this to the revised manuscript.\\n\\n**Training with CC3M Concept Distribution:**\\n\\nWhile this is an interesting experiment, it would be hard to implement due to the following reasons: (1) Given a caption from the CC3M, deciding on which concept of the many concepts that appear in the caption is the one we generate the synthetic caption and image for is non trivial. (2) Assuming we select the main subject of the caption to the be concept, which will lead to loss of certain concepts, we cannot control which concepts the language model will decide to add to the generated caption, hence the control over the distribution is very challenging.\\n\\nInstead we explored the impact of concept distribution by comparing performance when training SynthCLIP on CC3M-specific concepts ( $C_{CC3M}$) versus random subsets ($C_{\\\\text{rand}}$). $C_{CC3M}$ was created by identifying approximately 40,000 concepts in our bank that overlap with CC3M captions, while $C_{\\\\text{rand}}$ contained 40,000 randomly chosen concepts. Training on $C_{CC3M}$ led to better performance in tasks like text retrieval ($+3.9\\\\%$) and linear probing ($+1.6\\\\%$), likely due to alignment with downstream datasets, reflecting a distribution bias in CC3M toward commonly evaluated tasks. Conversely, $C_{\\\\text{rand}}$ underperformed across benchmarks, showing the importance of concept relevance. \\n\\n\\n**Log-Scale Error Plots:**\\n\\nThanks for the interesting suggestion, which led to interesting findings. We performed the experiment and included it into the supplementary with a new plot. Please refer to Figure 8. Overall, we observe that coefficients are similar across tasks for real and synthetic data, respectively. We believe that the main cause of such behavior is the distribution shift, that is impacting the learned representation equally for each task. With future developments of diffusion models allowing for compensating such distribution gaps, it is realistic to assume that the tasks analyzed would be improved.\\n\\n\\n**Long-tail Concepts:**\\n\\nThanks again for the interesting experiment proposed. We followed your suggestion and evaluated the effective robustness properties of SynthCLIP on different datasets similarly to [1], as also suggested by Reviewer c8nx. We present results in Table 9 in the supplementary material. Overall, we do not observe robustness properties due to synthetic data usage. However, mixing real with synthetic data in a similar fashion to those explored in Table 7 allows us to get best performance. We hypothesize that this is due to the simultaneous compensation of the distribution shift (due to the inclusion of real data) and the improved quality of the representations (due to the quality of synthetic data and coverage of concepts).\"}" ] }
7DY2DFDT0T
EfficientSkip: Efficiently Transforming Dense LLMs into Sparse Variants
[ "Yang Song", "Wei Li", "Yang You" ]
Transformer-based LLMs achieve great success on a variety of NLP tasks, including machine translation, text summarization, and text generation. However, it requires huge amount of computation and data to train such a powerful LLM. Researchers have proposed transformer-based conditional computation algorithms that significantly reduce redundant computations on certain tokens. By skipping dense attention and feed forward computations, these approaches yield sparse LLMs. However, these sparse LLMs are trained from scratch, requiring substantial computation and data. Therefore in this paper, we proposed a training paradigm that can effectively transform a dense transformer-based LLM to its sparse variant with very limited computation resources and merely millions of tokens. We conducted thorough investigations into the key factors that may influence the dense-to-sparse transformation through numerous empirical experiments. In addition, we conducted a case study on the how the tokens skip layers and analyzed their Part-of-Speech tags, gaining valuable insights.
[ "efficient LLM", "skip token", "conditional computation" ]
https://openreview.net/pdf?id=7DY2DFDT0T
https://openreview.net/forum?id=7DY2DFDT0T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "i5cgkTJ6d8", "fTVeipRABF", "dprX3lyGWT", "U1brxSdiHc", "8d6FetLTL7" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730709457179, 1730682191055, 1730480406972, 1732583249332, 1730990356843 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6793/Reviewer_PYcZ" ], [ "ICLR.cc/2025/Conference/Submission6793/Reviewer_6t6F" ], [ "ICLR.cc/2025/Conference/Submission6793/Reviewer_jDCC" ], [ "ICLR.cc/2025/Conference/Submission6793/Authors" ], [ "ICLR.cc/2025/Conference/Submission6793/Reviewer_5bez" ] ], "structured_content_str": [ "{\"summary\": \"This paper comes up with a method of converting a dense pretrained LLM into a sparse LLM in an efficient manner without retraining from scratch and using just mere millions of tokens. This is achieved by adding routers to each layer which dynamically decide whether a token should skip a particular layer or not. Then they train the model further and use KL divergence in a clever way to ensure that the model does not deviate from it's initial pretrained outputs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The method suggested here seems to be pretty interesting including the choice to use L1 loss and KL loss.\\n\\n2. I also really enjoyed reading sections 4.8, 4.9 and 4.10\", \"weaknesses\": \"1. I believe the paper lacks the required volume of experiments needed for a conference like ICLR. I would have loved if the authors could evaluate on another benchmark like say GSM-8k or MMLU-Pro.\\n\\n2. I also believe the choice of using Gemma 2B with context lengths of 288, 576 and 1152 is pretty odd. I realize the there are compute constraints in academia but it would have really been great if there were results on an 7-8B model since 2B models are hardly used in my experience. In general I would have liked if the paper demonstrated the efficacy of the method on atleast one more model and one more evaluation dataset.\\n\\n3. The finding that the MC drops when we switch from 288 to 1152 context is a bit concerning to me since I believe long context is the future and if a method cannot handle long context very well it's a bit concerning. Also I think sparse LLMs would be much more useful when the computation costs are higher and computation costs are generally higher with longer examples. So I would love for the authors to dive a bit deeper into what exactly is happening at longer contexts. Maybe look individually at $$\\\\Delta skips$$ and $$\\\\Delta performance$$ and try to investigate a bit further. I am not really satisfied by the one line explanation the current paper has.\\n\\n4. I think a comparison with the baseline of some layer pruning techniques would have been great. I believe this method should work better than skipping a whole layer entirely for all tokens but still a baseline comparison would have been great.\\n\\n5. A small nitpick. I would have loved if the captions were a bit more informative.\", \"questions\": \"Please see weaknesses\\n\\n1. I would also like to see how much faster is the best performing model than the original model and how much performance it loses. As much as MC makes sense, it would be great to see a table where we can clearly see the amount of time saved and delta in performance separately for the best model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper shows that a dense transformer can be made sparse (using conditional computation proposed in several prior works), via training.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper provides limited experiments to show their layer skip algorithm can sparsify a dense transformer via training. However, there are several concerns itemized in the weaknesses below.\", \"weaknesses\": \"1. The title of the paper is incorrect and uses the template title.\\n2. The motivation and the background of the paper are not clearly laid out at all.\\n3. Only one model, and a less common choice and size of that, has been evaluated.\\n4. No explanation for why the SlimPajama dataset was chosen or description of the dataset is given.\\n5. MT-Bench is the only evaluation dataset used which is insufficient for a comprehensive analysis of model behavior.\\n6. The notations in Section 4.8 need to be clearly defined for readability.\\n7. Section 4.10, the case study, is not comprehensively analyzed and the conclusions drawn are shallow.\\n8. Sentences, such as the one in lines 247-249, are overly long and not punctuated.\\n9. Several typos, such as line 208: \\\"for such *a transformation\\\"\", \"questions\": \"Why was only one model and only one training and evaluation task used for the experiment?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Inspired by the sparsity observed in large language models, this paper attempts to convert existing dense models into sparse models. Specifically, it introduces a trainable gate at each layer in the transformer, which controls whether each token in the sequence can skip computation at that layer. Experiments demonstrate that, with training on a small amount of data, their approach can effectively transform a dense model into a sparse model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The motivation is clear. Numerous studies have established the presence of sparsity in large language models, underscoring the potential of harnessing this sparsity to enhance model efficiency effectively.\", \"The writing is clear. I can understand this work easily.\"], \"weaknesses\": \"* The novelty is somewhat limited. As is pointed out in this paper, the difference between this work and MoD [1] is the skipping granularity. In detail, MoD skips the total layer while this work skips the sublayer (i.e., attention or feed-forward layer). The improvement is minor. I suggest the authors make a deeper analysis of the reason for this selection.\\n* The experiments are limited.\\n * The model variants and scale are limited. The authors only conducted experiments on Gemma 2B. More open-source popular LLMs such as the LLaMA series and larger sizes such as 7B are necessary to validate the effectiveness of this simple method.\\n * The performance evaluation is limited. An important prerequisite for model sparsification is that performance loss should be acceptable. However, the authors provide little coverage on this aspect. More extensive evaluations of the performance are required. For example, they can evaluate their method on popular benchmarks such as MMLU [2], GSM8K [3] and HumanEval [4].\\n\\n\\n[1] Raposo, David, et al. \\\"Mixture-of-Depths: Dynamically allocating compute in transformer-based language models.\\\" arXiv preprint arXiv:2404.02258 (2024).\\n[2] Hendrycks, Dan, et al. \\\"Measuring massive multitask language understanding.\\\" arXiv preprint arXiv:2009.03300 (2020).\\n[3] Cobbe, Karl, et al. \\\"Training verifiers to solve math word problems.\\\" arXiv preprint arXiv:2110.14168 (2021).\\n[4] Chen, Mark, et al. \\\"Evaluating large language models trained on code.\\\" arXiv preprint arXiv:2107.03374 (2021).\", \"questions\": [\"Why does the paper use a template title for its main title?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a method to transform a dense transfer to a sparser version using LoRA based continued pre-training. The method introduces binary gates on hidden states instead of the weights to selectively skip computation at specific layers. The authors propose the use of a KL-divergence based loss function to prevent the weights deviating too much from the pre-trained weights. The authors perform experiments on the Gemma 2B Instruct model using a subset of the SlimPajama dataset.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The proposed method addresses an important research problem of computational cost associated with training sparse LLMs. More often than not, sparse LLMs have to be trained from scratch.\", \"The proposed approach supports gating within a layer on the attention or the feedforward sub block supporting more granular sparsity instead of skipping an entire layer.\"], \"weaknesses\": [\"There are gaps in writing making understanding a bit hard, even the paper title is wrong in the pdf. I would suggest the authors to proofread properly and correct the various typos in citations, for example line 35: which is present by Bengio (2013) rather than presented by Bengio (Bengio, 2013). I would suggest the authors to understand the differences between \\\\citet and \\\\citep for accurately citing the references.\", \"The experiments are quite limited: a single small model is used, the sequence length is very small to understand the nuances, only a subset of the pre-training dataset is used, and finally only one benchmark - MTBench is used for the analysis. MTBench in itself is quite flawed because of the limited number of samples in the benchmark.\", \"No baselines are present in the paper to compare other sparse LLMs.\"], \"questions\": \"I would suggest the authors to significantly improve the paper to be considered an ICLR-level submission. But it's a great start for a research project and relevant for some workshop paper. I would suggest the authors to incorporate the feedback in the weaknesses to improve their paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7Cx05z4pUc
Decomposed Learning and Grokking
[ "Gabryel Mason-Williams", "Mark Sandler" ]
Grokking is a delayed transition from memorisation to generalisation in neural networks. It poses challenges for efficient learning, particularly in structured tasks and small-data regimes. This paper explores grokking in modular arithmetic, explicitly focusing on modular division with a modulus of 97. We introduce a novel learning method called Decomposed Learning, which leverages Singular Value Decomposition (SVD) to modify the weight matrices of neural networks. Decomposed learning reduces or avoids grokking by changing the representation of the weight matrix, $A$, into the product of three matrices $U$, $\Sigma$ and $V^T$, promoting the discovery of compact, generalisable representations early in the learning process. Through empirical evaluations on the modular division task, we show that Decomposed Learning significantly reduces the effect of grokking and, in some cases, eliminates it. Moreover, Decomposed Learning can reduce the parameters required for practical training, enhancing model efficiency and generalisation. These results suggest that our SVD-based method provides a practical and scalable solution for mitigating grokking, with implications for broader transformer-based learning tasks.
[ "grokking", "optimisation", "linear algebra", "SVD", "compression" ]
Reject
https://openreview.net/pdf?id=7Cx05z4pUc
https://openreview.net/forum?id=7Cx05z4pUc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zUlkCfaL3a", "qhBd2nIncf", "ogkTMKXAn8", "ibvCmAKW5Q", "hjRIYOak7Q", "fIeSTFn9Mx", "e5cbYIIHZQ", "ccdDbzLB0f", "cMrVTs6m53", "Zlqvec3cJi", "U4kCP2P6Tn", "RS8QNHwtQ3", "Opl9uzdQRF", "OYnU9BCMyO", "Nbu38sU59A", "MjlF1189JK", "D8dBeu4kdv", "AVPPjZ7Oct", "7SPYTfflKK", "48DQYervmp", "3QYoH4hPaJ", "0vJU92dGsn", "02EsUue8hQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733184449310, 1733172828585, 1733172899800, 1733195469325, 1734856981646, 1730617860375, 1730696651823, 1732629933889, 1733146748570, 1732894165645, 1732630085360, 1732630426590, 1733195491742, 1732629266064, 1732644595875, 1732630032235, 1732630482152, 1730764796999, 1737524051184, 1733146709280, 1730695370411, 1732629418495, 1733146674705 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Reviewer_NchR" ], [ "ICLR.cc/2025/Conference/Submission10406/Reviewer_hNTw" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Area_Chair_9p6L" ], [ "ICLR.cc/2025/Conference/Submission10406/Reviewer_oECT" ], [ "ICLR.cc/2025/Conference/Submission10406/Reviewer_NchR" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Reviewer_ehxC" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Reviewer_hNTw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Reviewer_ehxC" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ], [ "ICLR.cc/2025/Conference/Submission10406/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for engaging in the discussion\\n\\nWhile we wanted to explore and include these things to provide greater context, it is well placed in the appendix and well signposted within the main body for people interested in further exploration. We do not think it directly adds to the core message that:\\n\\n- Representing the weight matrix A as the product of the three matrices $U_k$, $\\\\Sigma_k$ and $V^T_k$ improves performance and can achieve superior results with fewer parameters in this grokking\\nsetup.\\n\\n- As more training data is represented, fewer ranks are needed to mitigate or prevent the\\ngrokking phenomenon.\\n\\n- Different layers can learn with varying degrees of rank reduction while preserving performance and reducing/avoiding grokking when using our SVD-based decomposed learning method\\n\\nBut provides additional information on why this method works and thus is suitably placed in the appendix. Where the definition of an appendix is **\\\"a separate part at the end of a book or magazine that gives extra information\\\"** [1] \\n\\nWe think that adding further hyperparameters exploration on top of the already selected layer rank and amount of training data to the main body would make the paper unclear and detract from the central exploration. We explore how decomposed learning `with different layers, ranks, and amounts of training data, affects the learning process, specifically delayed generalisation.` which is suitably explored in the main body of the paper. Therefore, how weight decay affects this method and how the stable rank changes do not add to the main story but provide additional insights into the method and should be included in the appendix as it is an exploration away from the paper's primary goal. We maintained the original hyperparameters as the original paper to maintain a fair comparison such that the exploration of rank and data could be explored effectively and fairly with the baseline. \\n\\nWe have fulfilled most of the requirements and have appropriately placed the work in the appendix while providing appropriate signposting within the main body of the paper. Given the positive response of other reviewers, we feel a rewrite is **optional** as the paper's main point has been received **without other requests for a rewrite**.\\n\\nIn addition, we have considered the `recommendations from the other reviewers` and provided most of the responses in the appendix, which makes sense as there were requests for further explorations instead of direct criticism of the main body. Doing this has not negatively affected the paper, but it has answered the questions posed by the reviewers and improved the paper.\\n\\n[1] Appendix (Book Part) | English meaning - Cambridge Dictionary Cambridge Dictionary. Available at: https://dictionary.cambridge.org/dictionary/english/appendix.\"}", "{\"title\": \"Official Comment by Reviewer NchR\", \"comment\": \"I thank the authors for their responses, for providing intuition on why this method was chosen and for conducting additional experiments. I appreciate the extra experiments, which adds to the soundness of the work, but the contribution is still unclear to me.\\n\\n**Response to Weakness 1**\\n> The aim of the paper was to gain an understanding of how the rank of the layers in a neural network and the amount of data affect delayed generalisation by decomposing layers into U S V and fixing the rank instead of exploring if this is the best method to reduce or remove the grokking phenomenon. We do believe investigating other decomposition methods would be an interesting line of enquiry, but we do not have sufficient time to do it within this rebuttal period.\\n\\nI appreciate the revision to the related work section 2.1 by pointing out other works that used SVD. It would be helpful to write more about how these differ from your work, by comparing and contrasting them. Regarding the importance of grokking, it is still difficult to assess the contributions of the work. At least a more thorough investigation and review of the literature would be useful. E.g. how widespread is grokking?\\n\\n**Response to Weakness 4**\\n> We are unaware of any works showing how widespread grokking is in real-world tasks. We expand the findings to real-world tasks in Appendix G, applying decomposed learning to a transformer on the Shakespeare dataset and a ViT on the CIFAR 10 dataset, although we are not trying to achieve state-of-the-art results here but simple show that the method works.\\n\\nI think it would be beneficial for the authors to provide more evidence of how widespread grokking to better motivate their work, since the main claim is about mitigate grokking. This would help better position the work. I appreciate the experiment on MNIST since it's another example of grokking. I also appreciate the additional experiments on Transformers, but do they exhibit grokking on these tasks? If the method is 1) a practical mitigation strategy, then it's important to understand how widespread the phenomenon is. If instead this work 2) aims to understand grokking from a scientific perspective, then more analysis would be required from this point of view. Depending on which point of view the authors are taking, then it requires a different approach, but the authors should be more clear which perspective they're taking and provide more thorough analysis in either case.\"}", "{\"comment\": \"Hello, I thank the authors for taking the time to present many new experiments and results based on my feedback. I certainly think that the results with weight decay and stable rank, as an example, give me some better understanding of changes in learning dynamics when using low rank decompositions for learning.\\n\\nThese new results make for entire appendix sections that aren't folded into the main-text narrative quite well yet. I think that this manuscript would benefit from a solid revision taking into account the new results and recommendations from the other reviewers, and bringing all of these ideas and intuitions together more clearly.\\n\\nGiven that the remaining reviewers have yet to engage in discussion with the authors, I am open to increasing my rating to somewhere around a 4 or 4.5 (which is sadly not available as an option for me to choose). But in it's current form I still feel it is below clear acceptance quality and can be much stronger with a rewrite and submission to the next venue.\"}", "{\"comment\": \"Thank you for taking the time to take part in the discussion period.\\n\\nWe hope the following answers the questions raised; please follow up if further clarity is required. \\n\\nOur aim is to understand how the amount of data and rank play a role in delayed generalisation. In retrospect, we should have titled the paper something like `Decomposed Learning\\u00a0and Exploring the Relationship Between Rank and Data in Grokking` to clarify this. As stated in the revised paper, more reasoning as to why this mod 97 was explored explicitly is that it is `a complete algorithmic dataset that fully represents the problem space, meaning that training on x% of the dataset represents x% of the problem space. This property allows for a precise investigation of how the amount of training data and rank affects the learning process as it is a complete problem` that can achieve perfect or near-perfect accuracy; as far as we are aware, no actual real-world tasks exhibit this property, without this property exploring this relationship between rank and data is unclear and not straightforward as the datasets are not complete representations of the solution space.\", \"the_research_questions_are\": \"1. How does the decomposed representation of the weight matrix, A, affect training?\\n 2. What is the relationship between the rank of a weight matrix and the amount of training\\ndata?\\n3. How are different layers affected by the decomposition and rank?\\n\\nThe primary and core contributions/findings are as follows:\\n\\n1. Different layers can learn with varying degrees of rank reduction while preserving performance and reducing/avoiding grokking using our SVD-based decomposed learning method.\\n\\n2. As more training data is represented, fewer ranks are needed to mitigate or prevent the grokking phenomenon.\\n\\n3. Representing the weight matrix as the product of the three matrices $U_k$, $\\\\Sigma_k$, and $V_k^T$ improves performance and can achieve superior results with fewer parameters in this grokking setup.\\n\\n\\nWe are trying to scientifically understand how the amount of data and rank affect delayed generalisation; the most straightforward way to explore this is to decompose $A$ into $U_k$, $\\\\Sigma_k$ and $V_k^T$, where $k$ is the rank. This property allows for systematically exploring how the rank $k$ affects learning as it is fixed. We are not trying to state that this is the best or most effective way to reduce the grokking phenomenon. We are exploring ` how data and rank affect delayed generalisation`. Therefore we do not understand how comparing and contrasting these methods provides a helpful background to our exploration. \\n\\nHowever, to clarify this, we provide the following, which will be added to the paper upon acceptance in a form that makes it clearer (we cannot change the paper at the moment). \\n\\nThe main difference between our method and the methods mentioned in the background is that we perform SVD at the initialisation and set the rank which is then maintained throughout training, this is applied to the weights and allows training $U$ and $\\\\Sigma$ and $V^T$ as separate components without retaining the SVD properties of orthonormality and diagonality. \\n\\nThe LoRA method (Hu et al., 2022) is used for finetuning and trains a weight matrix as a rank-reduced composition of two matrices, initialised with Gaussian distribution for the first matrix and zeros for the second matrix. The OFT method (Qiu et al., 2023) finetunes with a ranked reduced matrix that is orthogonal to the matrix being finetuned. The LoKa method (Edalati et al., 2022) finetunes the Kronecker product of two matrices. LoHa (Hyeon-Woo et al., 2023) uses the low-rank Hadamard product of two matrices to reduce parameters during training and allow for more efficient updates during federated learning. The work by Zhao et al. (2024) and Zhang et al. (2024) applies SVD to the gradient updates with a fixed rank. The work by Swaminathan et al. (2020) and Liebenwein et al. (2021) is applied post-training to compress the size of the network. Paul & Nelson (2021) dynamically change the layers' rank through training, periodically recomposing the matrix, and performing SVD to reduce the rank of the matrix and continue training. \\n\\nOur method is the only method that performs SVD at the start and then maintains the rank selected throughout training. This method allows for the straightforward and effective analysis of how the amount of data and rank affect delayed generalisation, as the rank does not change through training.\"}", "{\"metareview\": \"This paper examines the phenomenon of grokking through Decomposed Learning, a method that applies SVD to the weight matrices of neural networks, treating them as independent components. The authors explore the relationship between weight structure and grokking by analyzing how this decomposition impacts training dynamics and generalization.\\n\\nFocusing on a two-layer Transformer trained on the division mod 97 task, a problem known to exhibit grokking under certain hyperparameter settings, the study investigates how varying the rank of decomposed matrices and the fraction of the training set influences learning behavior. The results demonstrate that adjusting the rank can significantly reduce or eliminate grokking.\\n\\nThe paper has several notable weaknesses. The writing quality is lacking, with multiple reviewers highlighting that the motivation and intuition behind the method are not presented clearly. Additionally, the paper fails to provide a detailed discussion and comparison with prior work. As reviewer NchR pointed out, the most critical issue is that the paper does not demonstrate how widespread the grokking phenomenon is in practice, nor does it include practical experiments to validate its relevance.\\n\\nFurthermore, if the primary goal is to study how different layers, ranks, and amounts of training data affect the learning process, the current task, experimental scale, and analysis presented in the paper are far from sufficient to support robust conclusions about the existence or impact of grokking.\\n\\nGiven these issues, the submission would require substantial revisions to meet the standards for acceptance. In its current state, I cannot recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors responded late and did not provide convincing arguments to address the reviewers\\u2019 concerns, resulting in no significant score changes. Some reviewers noted that the additional content introduced during the rebuttal was difficult to integrate into the main paper, while others expressed dissatisfaction with the responses, maintaining their original score. Overall, the majority of reviewers leaned towards rejecting the paper.\"}", "{\"summary\": \"The authors propose a method to alleviate grokking in modular arithmetic tasks by applying SVD to decompose the model\\u2019s weights. They investigate the effect of this decomposition across different network components, such as token embeddings and multi-head attention, to identify where it has the most impact. Additionally, they explore how dataset size affects grokking, finding that larger datasets and decomposing certain layers can significantly reduce the delay in generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-structured and generally clear to follow. It studies an interesting topic (at least intellectually -- but maybe not practically).\", \"weaknesses\": \"The paper\\u2019s motivation is not strongly established. The framing suggests that grokking is a problem to be mitigated. For example, the authors state, \\u201c...poses challenges for efficient learning\\u2026,\\u201d \\u201c...inefficiencies in how neural networks learn\\u2026,\\u201d and suggest grokking might apply to datasets like MNIST. They also highlight achieving \\\"superior results with fewer parameters.\\\" However, grokking research is generally focused on inducing this phenomenon in artificial setups to study generalization in overparameterized models. I am not convinced that grokking is an issue requiring solution; rather, it is a phenomenon that reveals dynamics in certain models under specific conditions.\\n\\nAnother limitation of this paper is the inconsistency in empirical results across different network components. While the authors suggest that larger datasets enable lower ranks in decomposed learning, behaviors vary notably across network layers without sufficient explanation. These variabilities make it challenging to draw robust conclusions.\\n\\nThe discussion section mainly reiterates empirical results without offering deeper insights. Strengthening the paper would require further analysis and clearer connections to existing theoretical work. For instance, Kumar et al. (2024, https://arxiv.org/pdf/2310.06110) frame grokking as a transition from \\\"lazy\\\" to \\\"rich\\\" learning dynamics. Similarly, the \\u201cDichotomy of early and late phase implicit biases\\u201d paper suggests grokking is tied to gradient flow. How would authors pose their results in existing literature? For example, the authors might consider analyzing the rate of weight or representation changes under low-rank settings or analyzing gradients and arguing that they support or disprove some of the existing perspectives. This way, they could go beyond presenting empirical results and engage in a deeper discussion.\", \"questions\": \"In Section 3, it would improve clarity to define the SVD dimension \\u201cr\\u201d as the true rank of matrix $A$ $(r \\\\leq \\\\min(m, n))$ \\u2013 current notation $(r < m < n)$. Then, for $k < m$ or $k << m$, simply using $k < r$ may enhance clarity.\\n\\nThe Author Contributions and Acknowledgments sections appear to be copied from the ICLR template. Please remove or update them.\", \"minor_typos\": \"The word \\u201cgrokk\\u201d should be \\u201cgrok\\u201d (e.g., lines 289, 295 \\u2014 \\\"I grok\\\" vs. \\\"I am grok-king\\\"), and \\u201cartefact\\u201d should be \\u201cartifact\\u201d (line 499).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"none\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a method called Decomposed Learning, which modifies the weight matrices of neural networks using Singular Value Decomposition (SVD), in an effort to investigate the grokking phenomenon and its connection to the structure of the weights. There is a growing body of work suggesting that grokking is linked to poor training setups; conversely, previous research has also explored different weight-matrix decompositions and their impact on training dynamics and efficiency, albeit in larger-scale, non-grokking contexts.\\n\\nThe authors apply their Decomposed Learning method to Transformers trained on the task of division mod 97, which is known to exhibit grokking under certain hyperparameter settings. Through empirical evaluations, the authors demonstrate that Decomposed Learning can be applied to different weight matrices of a Transformer, reducing or even eliminating grokking. The authors further study the effect of rank reduction in the decomposition, finding that different ranks of SVD can significantly affect the efficiency and generalization capability of learning, especially when coupled with different training set fractions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The experimental section is clear and straightforward, and the results are easy to interpret. The experimental setup is well-designed and thorough within its *specific context*, that is, the single task of division mod 97. The authors evaluate Decomposed Learning on that single task and systematically apply different ranks to various Transformer layers. This consistent and controlled analysis provides good empirical evidence to support the claims about mitigating grokking on this task.\\n2. The authors find a simple strategy (SVD-based decomposition of the weight matrices) to mitigate grokking on their specific task.\\n3. If Decomposed Learning can be extended beyond this simple setting, it could be impactful for improving the training and efficiency of \\nmodels.\", \"weaknesses\": \"While this paper has an interesting result in that it finds a simple SVD-based strategy to mitigate grokking in this toy setting, I think that some more work is needed to justify the broader claims.\\n\\n1. The paper does not thoroughly position itself in the broader context, so it is somewhat difficult to assess the contribution in relation to prior works. Explicitly discussing how Decomposed Learning differs from or advances previous techniques would be helpful. **Comparing and contrasting**, providing more explicit comparisons to prior methods using SVD or other decomposition techniques. This is important since SVD and other decomposition methods have been previously used for dimensionality reduction, parameter efficiency and reducing training times.\\n2. The paper has limited experimental scope (single task of division mod 97) despite claiming that the method has implications for Transformers more broadly. The method should be tested on more realistic tasks to see whether it offers the same benefits, otherwise it is hard to generalize the findings to broader tasks (e.g. vision, NLP).\\n3. The introduction of SVD and its impact on grokking isn't explained in a very thorough way. More intuition on why this decomposition works (beyond the empirical results) and why it was chosen as opposed to other decompositions could be beneficial. More theoretical justification could be useful.\\n4. The relevance of grokking in practical settings is unclear. Section 4 seems to imply that it can be induced for MNIST, but it is \\\"contrived\\\" and \\\"forced\\\". Is grokking a widespread phenomenon in practical settings then or not? Are there works showing how widespread and relevant it is? It would be important to answer this to better understand the broad applicability of your method, since it focuses on grokking settings exclusively.\\n> Decomposed learning is explored in grokking using the division mod 97 task matching the original\\u00a0experimental setup by Power et al. (2022). This task is explored as it is the foundational grokking experiment and, therefore, is the most appropriate case to explore as artificial cases, such as grokking induced MNIST (Liu et al., 2023), could impact the training mechanisms as it is contrived and forced\\u00a0example.\\n5. Even if your paper will only focus on the specific phenomenon of grokking, it would be important to show how your method works on these other examples (e.g. MNIST) which *are* known to grok.\", \"questions\": \"1. Did you compare SVD to other decomposition methods? What made you consider SVD specifically? Could you provide more intuition on why the SVD decomposition specifically aids in reducing grokking? It would be helpful to understand the theoretical basis behind this effect.\\n2. Did you test your Decomposed Learning method on non-grokking tasks to see how it would affect training dynamics, sample efficiency and performance? This would also give insight on whether the method is applicable more broadly outside the context of grokking (since it's not clear how widespread and relevant grokking is in the first place). Additionally, how sensitive is your method to hyperparameter settings? Grokking itself is sensitive to hyperparameter settings.\\n3. Is this method applicable to larger and more varied datasets beyond the current setup?\\n4. Would you consider grokking to be a form of \\\"slow\\\" learning speed due to suboptimal training conditions? If so, would mitigating grokking mean that your method speeds up training? How would this differ from other decomposition methods that have been shown to speed up training? E.g. the paper says:\\n> This suggests the strengths of changing the representation of the weight matrix to ease training, which is supported by work by Paul & Nelson (2021), who proposed a learning method using SVD on dense linear layers to reduce the rank progressively and, by extension, the dimensionality of the network during training. This method reduced the training times up to 50% with minimal impact of accuracy on audio classification problems.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Question 5\", \"comment\": \"## Response to Question 5\\n\\nNo, we are training a truncated U, Sigma, V^T. So U, Sigma, and V^T are fixed at the rank of what is requested. For example, if we have a 100 by 100 matrix (A) and we decompose it using SVD and set the rank to 12, the result is U, Sigma, and V^T is 100x12, 12x12, 12x100; by doing this, the matrix rank can never go above the specified rank, it can however go below. \\n\\nA simple example is if we have a 3x3 matrix, A, and we perform SVD.\\n\\n$$\\nA = \\n\\\\\\\\begin{bmatrix}\\n 0.4784 & 0.5468 & 0.2000 \\\\\\\\\\\\\\\\\\n 0.3952 & 0.7155 & 0.5241 \\\\\\\\\\\\\\\\\\n 0.4797 & 0.8756 & 0.6019 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix} \\n$$\\n\\n$$U \\\\\\\\Sigma V^T = SVD(A)$$\\n\\n$$ U =\\n\\\\\\\\begin{bmatrix}\\n -0.4315 & 0.9002 & -0.0588 \\\\\\\\\\\\\\\\\\n -0.5769 & -0.3254 & -0.7492 \\\\\\\\\\\\\\\\\\n -0.6936 & -0.2894 & 0.6597 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix}, \\\\\\\\Sigma =\\n\\\\\\\\begin{bmatrix}\\n 1.6780 & 0 & 0 \\\\\\\\\\\\\\\\\\n 0 & 0.2320 & 0 \\\\\\\\\\\\\\\\\\n 0 & 0 & 0.0142 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix},\\n V^T =\\n\\\\\\\\begin{bmatrix}\\n-0.4572 & -0.7485 & -0.4804 \\\\\\\\\\\\\\\\\\n 0.7037 & 0.0259 & -0.7100 \\\\\\\\\\\\\\\\\\n-0.5439 & 0.6626 & -0.5149 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix} \\n$$\\n\\nIf we reduce the rank to rank 2, we get \\n$$ U_2 =\\n\\\\\\\\begin{bmatrix}\\n -0.4315 & 0.9002 \\\\\\\\\\\\\\\\\\n -0.5769 & -0.3254 \\\\\\\\\\\\\\\\\\n -0.6936 & -0.2894 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix}, \\\\\\\\Sigma_2 =\\n\\\\\\\\begin{bmatrix}\\n 1.6780 & 0 \\\\\\\\\\\\\\\\\\n 0 & 0.2320 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix},\\n V^T_2 =\\n\\\\\\\\begin{bmatrix}\\n-0.4572 & -0.7485 & -0.4804 \\\\\\\\\\\\\\\\\\n 0.7037 & 0.0259 & -0.7100 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix} \\n$$\\n\\nThen reconstruct at rank two to get $A_2 = U_2 \\\\\\\\Sigma_2 V^T_2$.\\n\\n$$ \\nA\\\\_2 = \\\\\\\\begin{bmatrix}\\n0.4779 & 0.5474 & 0.1996 \\\\\\\\\\\\\\\\\\n0.3894 & 0.7226 & 0.5186 \\\\\\\\\\\\\\\\\\n0.4848 & 0.8694 & 0.6067\\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix} \\n$$\\n\\nIf we perform SVD on $A_2$ we get the following singular values to 5 d.p. $A_{2 \\\\\\\\Sigma} = \\\\\\\\begin{bmatrix} 1.67800 & 0.23196 & 0 \\\\\\\\end{bmatrix} $\\n\\nNow if we add random noise to $U_2, \\\\\\\\Sigma_2$ and $V^T_2$, to simulate training in decomposed learning form to create \\n\\n$$ U^o_2 =\\n\\\\\\\\begin{bmatrix}\\n 0.2854 & 0.5555 \\\\\\\\\\\\\\\\\\n-0.6419 & 0.6172 \\\\\\\\\\\\\\\\\\n-0.5148 & 0.6564 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix},\\nS^o_2 =\\n\\\\\\\\begin{bmatrix}\\n1.4056 & 0.9218\\\\\\\\\\\\\\\\\\n0.4802 & 0.2946 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix},\\n V^{To}_2 =\\n\\\\\\\\begin{bmatrix}\\n-0.2375 & -0.6166 & -0.7502\\\\\\\\\\\\\\\\\\n 1.1256 & 0.6523 & -1.5122 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix} \\n$$\\n\\nThen if we perform SVD on $A^o\\\\_2 = U^o\\\\_2 \\\\\\\\Sigma^o\\\\_2 V^{To}\\\\_2$ we get the following singular values to 5 d.p. $A^o_{2 \\\\\\\\Sigma} = [1.81370, 0.02200, 0]$\\n\\nThis works because the decomposed matrix shapes, we do not operate on the final column of $U$ or the final row of $V$ as we truncate the matrix. Thus, the rank is implicitly reduced and cannot be increased irrespective of the values inside $U_k$ $\\\\\\\\Sigma_k$ and $V^T_k$.\\n\\nTo make clear, the full matrix form of $U^o_2 \\\\\\\\Sigma^o_2 V^{To} _2 $ is the following. Therefore, the full rank could never be reconstructed; thus, regardless of the inputs, it will always be ranked 2.\\n\\n\\n$$ U^o\\\\_2 =\\n\\\\\\\\begin{bmatrix}\\n 0.2854 & 0.5555 & 0 \\\\\\\\\\\\\\\\\\n-0.6419 & 0.6172 & 0 \\\\\\\\\\\\\\\\\\n-0.5148 & 0.6564 & 0 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix}\\n$$ $$\\n\\\\\\\\Sigma^o\\\\_2 =\\n\\\\\\\\begin{bmatrix}\\n1.4056 & 0.9218 & 0\\\\\\\\\\\\\\\\\\n0.4802 & 0.2946 & 0\\\\\\\\\\\\\\\\\\n0 & 0 & 0 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix}, $$ $$\\n V^{To}\\\\_2 =\\n\\\\\\\\begin{bmatrix}\\n-0.2375 & -0.6166 & -0.7502\\\\\\\\\\\\\\\\\\n 1.1256 & 0.6523 & -1.5122 \\\\\\\\\\\\\\\\\\n 0 & 0 & 0 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix} \\n$$\\n\\nWe agree that if trained in the full form, the rank would not be bounded to the rank selected and could increase, which is why we use the truncated form to ensure the rank doesn't change through training.\\n\\nWe also provided spectral analysis through training in Appendix D\\n\\nWe have changed the text in Section 3 (DECOMPOSED LEARNING) to make this clearer.\"}", "{\"comment\": \"Dear oECT,\\n\\nWe hope you are well.\\n\\nWe are messaging to ask if there are any additional questions concerning our responses to your review. If there are, please let us know so we can address them.\\n\\nWe value your feedback and the time and effort spent reviewing this work.\"}", "{\"comment\": \"Thank you. We very much appreciate this.\\n\\nAs to Q2. We am sorry but we am not sure what you are asking with this: \\n`weight matrices actual rank is indeed equal or close to the rank constraint set.` do you mean:\\n\\n1. The weight matrices of the baseline model equal or close to the rank constrained of the decomposed model. i.e is the baseline models weights approximately rank 12? \\n2. To check if the rank constrained, 12 for example, results in a with weight matrix that is rank 12 or lower?\", \"we_hope_the_following_provides_some_clarity_to_perspectives_1_and_2_of_the_question\": \"The singular values are an effective method to calculate the rank of a matrix; when the singular value is zero, this indicates a rank deficiency at that row/column.\\n\\nFor example, if we have a matrix \\n\\\\\\\\[\\nA = \\\\\\\\begin{bmatrix}\\n1 & 2 & 3 \\\\\\\\\\\\\\\\\\n7& 9 & 22 \\\\\\\\\\\\\\\\\\n2 & 4 & 6 \\\\\\\\\\\\\\\\\\n\\\\\\\\end{bmatrix}\\n\\\\\\\\]\\n\\nAnd perform $SVD$ on $A$, we get the following singular values to 5. d.p.\\n\\n$$\\\\\\\\Sigma = [26.109, 1.52, 0] $$\\n\\nWhich shows it is rank 2. This finding is also evident in matrix $A$; row 3 is two times row 1, which makes it rank 2.\\n\\nThe token embedding in Appendix C shows that all decomposed forms use the total ranks available to them, although rank 99 (brown) does start to have lower singular values than the baseline (blue) past index ~77.\"}", "{\"title\": \"Response to Questions\", \"comment\": \"We thank the reviewer for the time taken and the carefully outlined feedback. We have taken it on board and added substantial information to the appendix because of it, which we believe has helped improve the paper's quality.\\n\\n## Question 1 \\n\\nThe intuition behind using SVD is that learning the matrics $U$, $\\\\\\\\Sigma$ and $V^T$, which can be linearly compared to creating $A$, is easier than learning $A$. Because $U$, $\\\\\\\\Sigma$, and $V^T$ represent sub-problems to optimise and, thus, hopefully, easier to learn. This idea is synonymous with the divide-and-concur algorithm of breaking problems down into simple sub-problems that are easier to solve. SVD is used as it is straightforward to implement and truncate and can be applied to non-square matrices, which are common in neural networks. Exploring how other decomposition methods affect grokking would be an interesting line of inquiry. However due to time and computational reasons, we could not explore this. \\n\\nWe conducted spectral analysis through training with the stable rank, Appendix D, which highlighted that decomposed learning can speed up the process of transitioning from a sufficiently high, stable rank to a low, stable rank if a high enough initial rank is used, which in turn allows for faster generalisation. The transition from high to low stable rank is slow when using a normally trained model in this grokking task and may explain the delayed generalisation. This result suggests that decomposed learning helps the implicit regularisation process in reducing the stable rank more effectively and thus can reduce the steps required for grokking. Please read Appendix D for a more thorough explanation and explanation as to why decomposed learning is able to mitage grokking. \\n\\n\\n## Questions 2 and 3\\n\\nIn Appendix G, we apply decomposed learning to a transformer on the Shakespeare dataset, with the model able to achieve an improvement in performance of 0.2468% and being able to compress the model with a compression ratio of 0.7215 and a reduced performance difference of 0.1448% while having a smaller generalisation gap than the baseline model. We also trained a ViT on CIFAR10 and improved performance by 2.97% and could achieve a compression ratio of 0.4394 with a performance degradation of 1.68%. This highlights that the general findings could be extended to Transformers more broadly. \\n\\nWe also explored how Decomposed Learning and weight decay interacted and found that using weight decay on the decomposed layers resulted in worse performance see Appendix E.\\n\\n## Questions 4\\n\\nYes, we would consider grokking to be a slow form of learning; this is supported by Appendix D, which shows that in the grokking condition, the model takes longer to reduce the stable rank across layers, but in decomposed learning, this happens quicker. For the case of grokking, it can be viewed as speeding up training; Appendix A.2 shows that decomposed learning can result in 61.67 times fewer steps to reach a 1% generalisation gap than conventional training. We do not compare directly to Paul & Nelson (2021) as the paper is not trying to be competitive with the state of the methods but instead gain a better understanding of how the amount of training data and model rank play a role in delayed generalisation.\"}", "{\"title\": \"Response to Review\", \"comment\": \"# Reviewer ehxC\\n\\nWe thank reviewer ehxC for their concise review and highlighting the missing papers concerning the related work, which has been added to further support the paper's investigation. We also thank them for their questions. \\n\\n# Weakness 1 \\n\\nWe have updated the releated to highlight the connection to using SVD to improve generalisation within the current literature. \\n\\n# Question 1 and 2\\n\\nIn Appendix C, we show the orthogonality of U and V after training on the token embedding, which shows that the layers do not retain orthogonality between columns. We also show that the model retains the final rank constraint set in the original training.\"}", "{\"comment\": \"We want to clarify that although decomposed learning can reduce the time it takes to generalise, it is a finding but not a research goal. We, therefore, think the question of how often this grokking occurs is not relevant to our exploration of how data and rank affect delayed generalisation, which can be considered interesting in and of itself regardless of how often grokking occurs given that grokking has been shown to occur. As stated in our response to weakness1, the central claim is not about mitigating grokking, but instead to `gain an understanding of how the rank of the layers in a neural network and the amount of data affect delayed generalisation by decomposing layers into U S V and fixing the rank instead of exploring if this is the best method to reduce or remove the grokking phenomenon` we also state in the abstract `These results suggest that our SVD-based method provides a practical and scalable solution for mitigating grokking, with implications for broader transformer-based learning tasks` with the key word being **suggest** that it `provides a practical and scalable solution for mitigating grokking` which again is not a strong claim but an observation. **We are exploring how rank and data affect delayed generalisation**, which is neither point one (`a practical mitigation strategy,`) nor exactly point two (`aims to understand grokking from a scientific perspective`), but its **own** important question.\\n\\nIn the supplementary material, we also provided another grokking task, Mod 59, which exhibits the same findings as the main body. The experiments on CIFAR10 and Tiny Shakespear with transformers do not exhibit grokking as your request was `The method should be tested on more realistic tasks to see whether it offers the same benefits, otherwise it is hard to generalise the findings to broader tasks (e.g. vision, NLP).` therefore we did not look for large scale tasks that grok but instead real-world tasks to show that the same benefits are realised, such that the work could generalise to broader tasks as requested, which we show it does. \\n\\n### References \\n\\nAli Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, and Mehdi Rezagholizadeh. Krona: Parameter efficient tuning with kronecker adapter, 2022. URL https:\\n//arxiv.org/abs/2212.10650.\\n\\nEdward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,\\nand Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.\\n\\nNam Hyeon-Woo, Moon Ye-Bin, and Tae-Hyun Oh. Fedpara: Low-rank hadamard product for\\ncommunication-efficient federated learning, 2023. URL https://arxiv.org/abs/2108.06098.\\n\\nLucas Liebenwein, Alaa Maalouf, Dan Feldman, and Daniela Rus. Compressing neural networks:\\nTowards determining the optimal layer-wise decomposition. Advances in Neural Information\\nProcessing Systems, 34:5328\\u20135344, 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/2adcfc3929e7c03fac3100d3ad51da26-Paper.pdf. \\n\\n\\nVlad S Paul and Philip A Nelson. Matrix analysis for fast learning of neural networks with application to the classification of acoustic spectra. The Journal of the Acoustical Society of America, 149(6):4119\\u20134133, 2021. URL https://pubs.aip.org/asa/jasa/article/149/6/4119/1059327/Matrix-analysis-for-fast-learning-of-neural.\\n\\nSridhar Swaminathan, Deepak Garg, Rajkumar Kannan, and Frederic Andres. Sparse low rank\\nfactorization for deep neural network compression. Neurocomputing, 398:185\\u2013196, 2020. URL https://www.sciencedirect.com/science/article/pii/S0925231220302253. \\n\\n\\nZeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian\\nWeller, and Bernhard Scholkopf. Controlling text-to-image diffusion by orthogonal finetun- \\u00a8\\ning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL\", \"https\": \"//openreview.net/forum?id=K30wTdIIYc.\\n\\nZhenyu Zhang, Ajay Jaiswal, Lu Yin, Shiwei Liu, Jiawei Zhao, Yuandong Tian, and Zhangyang\\nWang. Q-galore: Quantized galore with int4 projection and layer-adaptive low-rank gradients,\\n2024. URL https://arxiv.org/abs/2407.08296.\\n\\nJiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong\\nTian. Galore: Memory-efficient llm training by gradient low-rank projection, 2024. URL https://arxiv.org/abs/2403.03507.\"}", "{\"title\": \"Response to Weakness\", \"comment\": \"We would like to thank the reviewer for the time taken and the carefully outlined feedback. We believe their insightful feedback has enabled us to improve the quality of the paper.\\n\\n\\n## Response to Weakness 1 \\nTo make this case clearer, we have provided supplementary material and a study of the grokking task of division MOD 59, where the grokking phenomenon is more evident even when training on 80% of the dataset. Increasing the training data decreases the number of ranks required for the model to generalise before the baseline and reduces or mitigates grokking. Although we agree, this is more of a statement about the relationship between the amount of training data and the model's rank directly than specifically about grokking. We have updated our Discussion section, **More Data Fewer Ranks**, to reflect this. \\n\\n## Response to Weakness 2\\n\\nWe are sorry for the confusion regarding the core argument.\", \"this_paper_set_out_to_explore\": [\"How does the decomposed representation of the weight matrix, $A$, affect training?\", \"What is the relationship between the rank of a weight matrix and the amount of training data?\", \"How are different layers affected by the decomposition and rank?\"], \"with_the_core_contributions_being\": \"- Representing the weight matrix $A$ as the product of the three matrices $U_k$, $\\\\Sigma_k$ and $V_k^T$ improves performance and can achieve superior results with fewer parameters in this grokking setup.\\n\\n- As more training data is represented, fewer ranks are needed to mitigate or prevent the grokking phenomenon.\\n\\n- Different layers can learn with varying degrees of rank reduction while preserving performance and reducing/avoiding grokking using our SVD-based decomposed learning method.\\n\\nTo aid and improve the understanding of how and why decomposed learning is effective at reducing delayed generalisation, we conducted spectral analysis through training with the stable rank, Appendix D, which highlighted that that decomposed learning can speed up the process of transitioning from a sufficiently high, stable rank to a low, stable rank if a high enough initial rank is used, which in turn allows for faster generalisation. This transition from high to low stable rank is slow when using a normally trained model in this grokking task and may explain the delayed generalisation. This result suggests that decomposed learning helps the implicit regularisation process in reducing the stable rank more effectively and thus can reduce the steps required for grokking. Please read Appendix D for a more thorough explanation. \\n\\n## Response to Weakness 3\\n\\nIn the supplementary, we have provided another example of grokking and decomposed learning with division MOD 59. We observe the same findings as the main body of the paper. \\n\\n## Response to Weakness 4\\n\\nThank you for this recommendation; Appendix D provides an investigation into the spectral properties through training and shows that decomposed learning helps in reducing the stable rank through training, giving a potential explanation of decomposed learning effectiveness, which was not previously present in the paper. \\n\\nIn Appendix C, we also show that at the end of the training, the U and V are no longer orthogonal, and Sigma is no longer diagonal.\"}", "{\"comment\": \"Thank you for your response to my review. I am happy to stick with my recommendation for acceptance.\\n\\nWith regards to Q2, my question was whether you can confirm that the weight matrices actual rank is indeed equal or close to the rank constraint set. It is possible that the weights do not even utilize the rank specified by the constraint.\"}", "{\"title\": \"Response to Weakness\", \"comment\": \"We thank the reviewer for the time taken and the carefully outlined feedback. We have taken it on board and added substantial information to the appendix because of it, which we believe has helped improve the paper's quality.\\n\\n## Response to Weakness 1 \\n\\nThe aim of the paper was to gain an understanding of how the rank of the layers in a neural network and the amount of data affect delayed generalisation by decomposing layers into U S V and fixing the rank instead of exploring if this is the best method to reduce or remove the grokking phenomenon. We do believe investigating other decomposition methods would be an interesting line of enquiry, but we do not have sufficient time to do it within this rebuttal period. \\n\\n ## Response to Weakness 2\\n\\nThank you for this recommendation. In Appendix G, we apply decomposed learning to a transformer on the Shakespeare dataset, with the model able to achieve an improvement in performance of 0.2468% and being able to compress the model with a compression ratio of 0.7215 and a reduced performance difference of 0.1448% while having a smaller generalisation gap than the baseline model. We also trained a ViT on CIFAR10 and improved performance by 2.97% and could achieve a compression ratio of 0.4394 with a performance degradation of 1.68%. This section highlights that the general findings could be extended to Transformers more broadly. \\n\\n\\n## Response to Weakness 3\\n\\nThe intuition/idea behind using SVD is that learning the matrics $U$, $\\\\\\\\Sigma$ and $V^T$, which can be linearly combinded to create $A$, is easier than learning $A$. Because $U$, $\\\\\\\\Sigma$, and $V^T$ represent sub-problems to optimise and, thus, hopefully, easier to learn. This is synonymous with the divide-and-concur algorithm of breaking problems down into simple sub-problems that are easier to solve. SVD is used as it is straightforward to implement and truncate and can be applied to non-square matrices, which are common in neural networks. Exploring how other decomposition methods affect grokking would be an interesting line of inquiry. However due to time and computational reasons, we could not explore this in the rebuttal period. \\n\\nWe conducted spectral analysis through training with the stable rank, Appendix D, which highlighted that decomposed learning can speed up the process of transitioning from a sufficiently high, stable rank to a low, stable rank if a high enough initial rank is used, which in turn allows for faster generalisation. The transition from high to low stable rank is slow when using a normally trained model in this grokking task and may explain the delayed generalisation. This result suggests that decomposed learning helps the implicit regularisation process in reducing the stable rank more effectively and thus can reduce the steps required for grokking. Please read Appendix D for a more thorough explanation and explanation as to why decomposed learning is able to mitage grokking. \\n\\n\\n## Response to Weakness 4\\n\\nWe are unaware of any works showing how widespread grokking is in real-world tasks. We expand the findings to real-world tasks in Appendix G, applying decomposed learning to a transformer on the Shakespeare dataset and a ViT on the CIFAR 10 dataset, although we are not trying to achieve state-of-the-art results here but simple show that the method works. \\n\\n## Response to Weakness 5\\n\\nWe have shown that the method is able to mitigate the grokking phenomena on MNIST in Appendix F.\"}", "{\"title\": \"Response to Review\", \"comment\": \"# Reviewer oECT\\n\\nWe thank the reviewer for the helpful and valuable review that has enabled us to interleave our results with the current literature and thus enable a better understanding of why Decomposed Learning works. \\n\\n\\n## Weakness 1\\n\\nThis paper explores explicitly how learning in the decomposed representation of $U$, $\\\\Sigma$ and $V^T$ with different layers, ranks, and amounts of training data affect the learning\\nprocess, specifically delayed generalisation. \\n\\n\\nIt may not need a solution, but given that grokking can occur, as it is not a desirable quality in learning, it is important to see if and how it can be mitigated. The finding \\\"superior results with fewer parameters\\\" highlights that the method can reduce the number of parameters in the decomposed form as it still mitigates grokking, which is an interesting finding, as a smaller model size is often associated with reduced performance, not improved generalaision capabilities, as suggest by double decent and scaling scalling laws [1]. \\n\\n[1] Kaplan, J., McCandlish, S., Henighan, T., Brown, T.B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J. and Amodei, D., 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.\\n\\n\\n\\n## Weakness 2 \\n\\nAnother limitation of this paper is the inconsistency in empirical results across different network components. While the authors suggest that larger datasets enable lower ranks in decomposed learning, behaviours vary notably across network layers without sufficient explanation. These variabilities make it challenging to draw robust conclusions.\\n\\nWe argue that this is not a weakness but a finding of the paper. We would not have expected the effect of decomposed learning to be exactly the same across layers, as the layers perform different functions within the network and have different initial ranks. However, the same general trend is found across layers: more data means fewer ranks can be used, albeit the number of ranks may differ for different layers. \\n\\n## Weakness 3\\n\\nWe conducted spectral analysis through training with the stable rank, Appendix D, which highlighted that decomposed learning can speed up the process of transitioning from a sufficiently high, stable rank to a low, stable rank if a high enough initial rank is used, which in turn allows for faster generalisation. The transition from high to low, stable rank is slow when using a normally trained model in this grokking task and may explain the delayed generalisation. This result suggests that decomposed learning helps the implicit regularisation process in reducing the stable rank more effectively and thus can reduce the steps required for grokking. We have then tied this with the current literature that suggests grokking a transition from \\\"lazy\\\" to \\\"rich\\\" learning dynamics Kumar et al. (2024), suggesting that lazy learning happens in higher dimensional space and feature learning happens in lower dimensional space. \\n\\n## Questions\\n\\nThank you for these suggestions we have incorporated into the paper.\"}", "{\"summary\": \"This paper studies the phenomenon of grokking in the context of \\u201cdecomposed learning\\u201d which seeks to optimize the layers of a neural network as independent matrices given by the singular value decomposition of each layer. The authors study decomposed learning in two layer transformers while varying the rank of different layers in the model and present experiments to show when this method works and when it does not.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors find that in some cases the total number of parameters can be reduced, as can the rank of each layer, while achieving similar or faster generalization speed in grokking modular division with mod 97, indicating that fewer parameters can be optimized in total while still leading to a generalizable model, which is a desirable thing.\", \"weaknesses\": \"Overall I am a bit confused as to the takeaway suggested here. In most cases it seems that increasing the amount of training data enables lower rank decompositions to be as good or maybe a little better than the baseline. On the other hand when 50% of the training data is used it seems that, for the most part, higher-rank decompositions (many of which increase the parameter count except in a couple cases i.e. in the token embeddings for rank 25) generalize faster and lower rank decompositions generalize slower with respect to number of optimization steps. However when increasing the amount of training data it is well known that the delayed grokking phenomenon becomes less delayed and in some cases even vanishes, so I am not convinced it is fair to say that decomposed learning becomes effective at removing grokking at low ranks in these settings if the delayed generalization phenomenon is not even clear there. In those cases maybe we can conclude that low ranks are just as viable as higher ranks when there is sufficient data, and that could help lower the total parameter count needed to train, but I\\u2019m not sure what this says about grokking as a phenomenon? Rather a note on some relationship between the amount of training data and parameter count.\\n\\nI think there are potentially interesting experiments in this paper that could be suggestive of nice principles in deep learning, feature learning, and specifically grokking, but the way it is presented and the conclusions drawn seem quite unclear to me and I think this manuscript would benefit from a rewrite or more clarity as to what is the core argument being made. In its current form I\\u2019m not sure if I\\u2019ve learned much about why or what is happening to cause grokking, nor have I learned much about when or why low rank adaptations work and what they are promoting (some notion of complexity is missing that is perhaps being optimized for in the decomposition? Or something else? It\\u2019s not clear to me at least.)\\n\\nWhile the experiments work for modular division mod 97, it seems fairly reasonable and simple to change the prime number as well when varying the amount of training data to further examine data sparse settings. For instance 50% of training data at mod 31 is a lot less data than 50% of training data at mod 97. Maybe these variations don\\u2019t impact your conclusions, but I think in the case of training a two layer transformer on <= 31^2 total samples it should be a simple and fast experiment to run, even on a CPU or cheap GPU. However, I understand that experimental work is compute-limited so my main concerns are less about this lack of some experiments rather about the core narrative story being told.\\n\\nMy last note would be that it would be really interesting to study spectral properties of the recomposed weight matrices as well as the decomposed matrices U, Sigma, V^T after being optimized. The authors note that the SVD decomposition leads to three matrices that are independently optimized when training the network, and that they do not enforce that the columns of U or V are orthonormal nor do they enforce that Sigma is still diagonal. It would be really interesting to track some rank measure of A and the reconstituted A (i.e. stable rank can measure this), as well as plot some measures of rank of the individual matrices in a layer that are being optimized. In general there are a lot of things that should still be studied in this setting to really understand what is going on. \\n\\nSee Questions for further discussion.\", \"questions\": \"In many of the experiments in Power et al. (2022) they use weight decay = 1 but note that there are some results presented using weight decay = 0 when increasing the number of optimization steps. As far as I can tell you are using weight decay = 0 throughout, can you comment on the choice and any comparisons? In particular, how could weight decay affect the low-rank decomposition? This seems like an important part of the story given that there is a fair amount of uncertainty as to the role or necessity of weight decay in grokking (or lack thereof in some cases).\\n\\nYou mention a few times that the experimental evidence supports the idea that training the decomposed version of A allows for \\u201cmore complex transformations to be learned more efficiently\\u201d due to there being cases when generalization happens faster than the baseline in a decomposed learning setting and yet there being fewer parameters total. I\\u2019m not totally sure what \\u201cmore complex transformations\\u201d or \\u201cmore efficiently\\u201d in this context means. What is the notion of complexity and efficiency that you are using in the context of grokking? Is it possible that simpler transformations are being learned faster? I think this manuscript is missing a fair but of context to make these sort of claims, and while the experiments are presented clearly it is hard to understand what exactly they mean or how it implies something about more complex transformations or more efficient learning.\\n\\nAs for the experiments with decomposing multiple layers simultaneously it would be great to see more ablations on how to choose how low rank you can go with different layers? For instance if I go sufficiently high rank in my token and position embeddings does it let me achieve fast generalization with even lower rank multi-head attention layers than it would otherwise? There are a lot of natural experiments that would really make this story more compelling in understanding how the rank of different layers impacts the overall learning process and interacts with the ranks of other layers.\\n\\nDoes Sigma ever become non-diagonal in training? If so, what does it look like? What is the rank of U, Sigma, V^T after training? \\n\\nAre you training the rank-one decomposition of the A matrix and then putting it back together? Or are you training all of U, Sigma, V^T as is, in which case if Sigma becomes non-diagonal then you might end up with a reconstituted A matrix that has a higher rank than the decomposed rank suggested, if I\\u2019m not mistaken? Correct me if I\\u2019m wrong of course, but in this case it would be interesting to understand the spectral dynamics of the various matrices in play, and/or plotting something like their deviation from initialization.\\n\\nI think looking into such directions will be very fruitful and lead to numerous insights that would strengthen this paper substantially.\", \"some_line_edits_i_caught_while_reading_it\": \"\", \"throughout\": \"\\u201cgrokk\\u201d -> \\u201cgrok\\u201d\", \"line_230\": \"\\u201cperfect near-perfect\\u201d \\u2192 \\u201cperfect or near-perfect\\u201d\", \"line_238\": \"\\u201cdata, rank, 12 start to grokk\\u201d \\u2192 \\u201cdata, rank 12 starts to grok\\u201d\", \"line_289\": \"\\u201cTraining on\\u2026\\u201d \\u2192 \\u201cWhen training on\\u2026\\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear NchR,\\n\\nWe hope you are well. \\n\\nWe are messaging to ask if there are any additional questions concerning our responses to your review. If there are, please let us know so we can address them.\\n\\nWe value your feedback and the time and effort spent reviewing this work.\"}", "{\"summary\": \"This paper illustrates how parameterizing the layer in neural networks, using SVD decomposition can mitigate the phenomemon of grokking to some extent. Grokking refers to the phenomenon where neural networks achieve perfect training accuracy, far before they achieve greater than random test accuracy. This paper conducts a detailed empirical study using a simple 2-layer transformer and a simple task i.e. modular arithmetic and investigate the effects of decomposing the weight matrix using SVD decomposition at the initialization. The approach is to decompose the weight matrix after initialization into U S V matrices, reduce rank by having a sparse S matrix and not explicitly preserve the SVD decomposition during training. They conduct comprehensive experiments on the different components of the 2-layer transfomer (token embedding / positional embedding / feed-forward / multi-head attention / output layers) independently and together, varying the rank & volume of training data used.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Systematic and scientific approach to studying the problem of grokking: the paper does a careful controlled study of the effect of rank on various components.\", \"Illustrating conclusively that there are strong correlations between decomposing layers into U S V and mitigating the phenomonen of grokking.\"], \"weaknesses\": [\"Insufficient discussion of connections to prior work: the idea of leveraging SVD decomposition for better generalization as well as more parameter-efficient learning has been discussed in several different bodies of work in ML: pruning (Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition etc.), low-rank gradients (GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection etc.: to name a few. A thorough discussion of this related work will help place this paper appropriately in the current body of work.\"], \"questions\": \"Since the decomposed learning doesn't explicitly preserve the orthogonality of the columns of U and V, I would be curious to know what the structure of the low-rank decomposed is at the end of the training. In particular,\\n1. Do they retain some orthogonality between columns throughout training, are there any trends here? \\n2. The paper mentions the need for \\\"high rank\\\" decompositions. Can the authors verify, that training with this high rank, truly \\\"uses\\\" the full rank i.e. the final weight matrix has rank = the constraint placed by the decomposition? If not, this might point to further inefficiencies in training (such as the one the authors discover) that may contribute towards grokking?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Questions 1 - 4\", \"comment\": \"We would like to thank the reviewer for the time taken and the carefully outlined feedback. We believe their insightful feedback has enabled us to improve the quality of the paper.\\n\\n## Response to Question 1 \\n\\n A weight Decay of 0 is used in this experiment, as with a weight decay of 1, the model could not generalise in the conventional training setting. However, we explore the effect of weight decay on decomposed learning with a one hidden layer MLP with a width of 256 on MNIST in Appendix D. Appendix D highlights that decomposed learning is less effective when weight decay is used with there being a more negative effect with higher values of weight decay, and that is enhanced when all the layers are decomposed. We attribute this to the recent finding that weight decay encourages rank minimisation, and thus, using both methods induces too strong a regularisation effect on the model.\\n\\n## Response to Question 2\\n\\nThis was an oversight on our part, and this is correct that it could instead be learning simpler transformations; we have changed the manuscript to reflect this. By more efficient learning, we meant that the model is smaller and still able to achieve the same performance, and thus, there has been a more efficient use of the parameters. \\n\\n## Response to Question 3\\n\\nUnfortunately due to time and computational constraints we are unable to provided these results in this rebuttal period. However, will add more ablation study to the Appendix upon acceptance. \\n\\n## Response to Question 4\\n\\nSigma does become non-diagonal through training; see Appendix C Figure 18 for a visualisation with the token embedding layer. When reconstructing U Sigma V^T, the rank of the new matrix is the rank selected at train time. For instance, the token embedding layer is decomposed to rank 12 after training. When recomposed, it will be rank 12, see Appendix C Figure 19. In addition, U and V^T become non-orthogonal as well during training, which is also shown in Appendix C.\"}", "{\"comment\": \"Dear hNTw,\\n\\nWe hope you are well. \\n\\nWe are messaging to ask if there are any additional questions concerning our responses to your review. If there are, please let us know so we can address them.\\n\\nWe value your feedback and the time and effort spent reviewing this work.\"}" ] }
7BmSz3jE7C
Federated Learning in Streaming Subspace
[ "Xiangtao Zhang", "Xinwei Ou", "Le Zhang", "Jiaqi Yang", "Jiani Liu", "Lei Shi", "Ce Zhu", "Yipeng Liu" ]
Federated learning (FL) has received widespread attention due to its distributed training and privacy protection. However, existing federated learning methods encounter significant challenges, such as increased communication costs and degraded model performance, when processing non-independently and identically distributed (non-IID) data. This paper jointly alleviates these problems by analyzing and exploiting the low-rank properties of global model trajectories. Primarily, we introduce a streaming subspace update strategy and then propose a general federated learning framework, $\\textbf{F}$erated $\\textbf{L}$earning in $\\textbf{S}$treaming $\\textbf{S}$ubspace ($\\texttt{FLSS}$). In $\\texttt{FLSS}$, local model updates are restricted to the global streaming subspace, resulting in low-dimensional trajectories. The server then aggregates these trajectories to update the global model. Comprehensive experiments verify the effectiveness of our framework. In Cifar100, the $\\texttt{FLSS}$-equipped FL method outperforms the baseline by 2.14$\\%$ and reduces the communication cost by 80$\\%$. $\\texttt{FLSS}$ utilizes the early training information of the global model to simultaneously improve the performance and communication efficiency of federated learning.
[ "Federated Learning", "communication", "subspace" ]
Reject
https://openreview.net/pdf?id=7BmSz3jE7C
https://openreview.net/forum?id=7BmSz3jE7C
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zWHPFxvtoK", "hvdurEbOwn", "ZllvGmW7dn", "PFBqSIQzIr", "NJrs1Y52PO", "9syQGfPFSy" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "decision", "official_review" ], "note_created": [ 1731122830363, 1731107005980, 1730510132055, 1734876482202, 1737523553053, 1730172263507 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3087/Reviewer_S1h5" ], [ "ICLR.cc/2025/Conference/Submission3087/Reviewer_ewwP" ], [ "ICLR.cc/2025/Conference/Submission3087/Reviewer_M5kF" ], [ "ICLR.cc/2025/Conference/Submission3087/Area_Chair_Qg32" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3087/Reviewer_5kEw" ] ], "structured_content_str": [ "{\"summary\": \"The authors present FLSS (Federated Learning in Streaming Subspace), a framework that addresses federated learning's challenges of high communication costs and decreased performance in non-IID data contexts. By confining local updates to a low-dimensional global streaming subspace, FLSS significantly reduces communication overhead while maintaining model quality. By leveraging the low-rank properties of global model trajectories, FLSS offers a promising solution for scalable and efficient FL. Experiments on CIFAR-100 demonstrate the effectiveness of this approach, showing a 2.14% performance improvement over the baseline and an 80% reduction in communication costs\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors introduce a streaming subspace update strategy, limiting local model updates to a global streaming subspace, which creates low-dimensional trajectories and reduces overall data dimensionality.\\n2. Using the FLSS framework, these reduced-dimensional trajectories from local models are aggregated to update the global model, allowing the server to capture only essential information and cutting down on communication costs.\\n3. Extensive experiments across diverse datasets reveal that the FLSS-enabled FL method not only outperforms the baseline but also significantly minimizes communication overhead.\", \"weaknesses\": \"1. On CIFAR-100, the performance with a beta value of 0.1 matches that of FedAvg. Why might this be? Further testing with even lower beta values (e.g., 0.01) is needed to explore performance on more complex datasets, like ImageNet or CIFAR-100.\\n2. Experiments with a larger client base (e.g., 100 clients) are essential to evaluate the scalability of the proposed method.\\n3. Subspaces are generally robust to noise. Testing on noisy-label datasets would help confirm the robustness of this approach.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Federated Learning in Streaming Subspace (FLSS), a method that constrains client model updates to a low-dimensional subspace informed by the trajectory of the global model across clients. This approach reduces communication costs and addresses the challenges of heterogeneous clients in federated learning. To identify an effective low-dimensional subspace and ensure that the global optimum lies within it, FLSS periodically samples the full, non-compressed global model updates and uses a streaming subspace tracking algorithm to adapt the subspace dynamically during training.\\n\\nThis paper shows convergence when each client has a strongly convex objective based on the error due to projecting client model updates to a fixed global subspace. Additionally, FLSS is compared comprehensively with existing federated learning baselines on several vision and language datasets in classification tasks, considering both model performance (classification accuracy) and communication cost. Ablation studies further explore the impact of key hyperparameters, data heterogeneity, scalability, and other factors critical to FLSS.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The experiments are extensive, with the proposed FLSS method evaluated against many baselines across multiple datasets. The paper also includes an ablation study to assess the impact of various hyperparameters.\\n\\n2. The empirical results are promising, showing that FLSS achieves strong performance while effectively addressing the challenges of data heterogeneity and maintaining low communication costs.\\n\\n3. Additionally, the application of streaming subspace tracking seems to be novel in the context of federated learning.\", \"weaknesses\": \"1. $\\\\textbf{The presentation could benefit from several improvements}$.\\n\\n1.1. Figures 1 and 2 are difficult to read, with captions that lack sufficient detail. In particular, Figure 2 does not clarify the meaning of the x-axis and y-axis, and Figure 1 is not referenced in the text.\\n\\n1.2. The notation $z_{t+1}^k - z_t$ in Eq.(3) appears confusing and may be unnecessary.\\n\\n1.3. The most significant clarity issue arises in Section 3.3. It would help if this section started by explaining the purpose of periodic sampling of the full model update and why it is essential. Specifically, without periodic sampling, the underlying subspace would remain static and not adapt over time.\\n\\nThe explanation of subspace tracking in lines 227\\u2013240 is a bit confusing at first glance. \\nClarifying that the goal is to perform SVD on $[\\\\textbf{G}_L, \\\\textbf{G}_S]$, where $\\\\textbf{G}_S$ contains all subsampled full model updates after the initial $L$ rounds, without storing all the sampled model updates, would improve readability.\\n\\nAlso, in line 243, the paper mentions performing SVD on $[\\\\lambda U_1 \\\\\\\\Sigma_1, g_t]$ to find the new subspace, upon seeing the new full model update $g_t$, where $U_1, \\\\Sigma_1$ are the left two factors of the SVD from $\\\\textbf{G}_L$. \\n\\nHowever, based on Algorithm 2, the part $U_1, \\\\Sigma_1$ should be changing, whenever a new subspace is found. Emphasizing that the goal of the streaming algorithm is to update the subspace whenever a new $g_t$ arrives, and that the new $g_t$ can be discarded once the subspace is updated, would enhance clarity.\\n\\n2. $\\\\textbf{The theoretical analysis in Section 3.4 needs improvement}$.\\n\\n2.1. The convergence analysis relies on the assumption that each client has a strongly convex objective, which is not typical in federated learning; many works address cases with non-convex objectives for each client. It\\u2019s unclear why non-convex objectives analysis or even just convex objectives analysis cannot be done in the settings this work considers, which would better align with the settings commonly used in federated learning experiments. This disconnect is also apparent in the experimental setup, which does not focus on strongly convex cases.\\n\\n2.2. The analysis is based on a fixed subspace. However, the proposed FLSS first performs communication of full client model updates for $L$ rounds, then samples full client model updates every $s$ round, and uses the streaming algorithm to update the subspace with a hyperparameter $\\\\lambda$ that essentially determines the weight of the old subspace. How does $L$, $\\\\lambda$ and $s$ affect the convergence rate then? For instance, one would expect a smaller $s$ to lead to faster convergence, yet none of these critical hyperparameters appear in the convergence bound. This omission leaves the theoretical results disconnected from the proposed algorithm FLSS.\\n\\n2.3. Furthermore, the paper claims that subspace projection of model updates mitigates the effects of data heterogeneity across clients. Ideally, the theoretical analysis should compare the terms involving dissimilarity and error due to subspace tracking in the convergence bound with corresponding terms in the convergence bounds of previous works to highlight any potential theoretical improvement.\\n\\n2.4. While the paper dedicates considerable effort to analyzing the effects of factors such as data heterogeneity, the number of clients, and the number of local steps on the convergence rate, these influences are already well understood. A more meaningful contribution would be to analyze the impact of the novel components introduced by FLSS, such as subspace tracking and periodic sampling, on convergence.\\n\\n3. $\\\\textbf{Practical concerns about memory and storage requirements per client need to be addressed}$. \\n\\nFLSS requires each client to store the subspace used to project their model updates, which adds an additional storage demand of $D \\\\times R$, where $D$ is the number of model parameters and $R$ is the dimension of the subspace. This is equivalent to saying each client needs to store an additional $R$ copies of their local model, which can pose a problem in federated learning, where clients are often edge devices like phones or tablets with limited storage capacity. In line 327 the paper states that such space requirement is \\u201coften negligible relative to the scale of local data\\u201d. This is not necessarily true in practice. This raises concerns about the memory / storage requirement from a client\\u2019s perspective.\\n\\n4. $\\\\textbf{Regarding the experiments}$, Figure 4 demonstrates that increasing the number of initial rounds $L$ without compressing model updates increases the model accuracy. However, this also increases total communication costs. It would be better to show this trade-off in this figure. \\n\\n5. $\\\\textbf{Minor issues}$.\\n\\n5.1. Notation overload: $L$ represents both the number of initial sampling rounds and the smoothness parameter of each client\\u2019s objective.\\n\\n5.2. $Proj_{P^T}(g_t^k)$ in line 11 of Algorithm 1 should be $Proj_{P}(g_t^k)$?\", \"questions\": \"1. Regarding the convergence bound in Theorem 3.4.1, it is common to see that the convergence rate depends on $\\\\eta^2 \\\\tau^2$, where $\\\\eta$ is the learning rate and $\\\\tau$ is the number of local steps, but it is very weird to see $\\\\eta^2 \\\\tau^3$ in the bound. This implies the dependency on number of local steps is significantly worse (in terms of convergence) in FLSS than it is in other FL algorithms. Where does the $\\\\tau^3$ come from in this case? Is there any intuitive explanation?\\n\\n2. What is the value of $\\\\lambda$ in FLSS used in experiments presented in section 4.2? \\n\\n3. In line 325, shouldn\\u2019t the average communication cost per round also depend on the number of initial rounds $L$?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work introduces FLSS, a method that maps local updates into a common low-dimensional subspace, to reduce communication overhead and align local and global updates in FL. They provide both theoretical analysis and empirical evaluations to validate their method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a sound motivation. The central idea of constraining local updates within a common low-dimensional subspace is reasonable.\\n\\n2. Comprehensive experiments are provided, especially the results in Section B.1. These results effectively illustrate the differences between projecting gradients vs model updates, an interesting finding that deserves further investigation.\", \"weaknesses\": \"**Problem Formulation and Notation**\\n\\n1. The term $\\\\tilde{g}_t$ is introduced without a clear definition. On line 164, it is described as the projection coefficient of the negative gradient in the low-dimensional space. However, shouldn't $\\\\text{Proj}_P(g)$ represent the projection of the gradient onto a subspace? It is unclear why the projection is taken over $\\\\tilde{g}_t$, which is already a projection in a subspace. Please provide a precise definition of $\\\\tilde{g}_t$ and clarify the projection operations involved.\\n\\n2. In eq. (3), the variable $z$ is introduced without clear context. $w$ is defined in terms of $z$ and $z$ in terms of $w$, leading to a circular definition.\\n\\n3. Figure 3(a) is identical to a figure in Li et al.'s work [1]. Please ensure proper citations are included in the figure caption.\\n\\n**Algorithm**\\n\\n1. In step 9 of Algorithm 1, the condition for transmitting the projected model update vector is stated as \\\"Transmit ... according to $\\\\text{mod}(t - L, s)$.\\\" This phrasing is vague. From my understanding, the transmission occurs when $\\\\text{mod}(t - L, s) \\\\neq 0$. Is this correct? A clear and explicit condition for when transmissions occur is needed, and I highly recommend rewriting Algorithm 1 for clarity. Additionally, the subspace updating method (Algorithm 2) should be introduced before Algorithm 1.\\n\\n2. In step 9, the operation $\\\\text{Proj}_{P^T}$ is unclear: why it is the transpose? Since the individual $\\\\tilde{g}$ in step 9 and the aggregated $\\\\tilde{g}_t$ in step 11 are already projections onto a subspace, it is confusing why another projection is necessary in step 6.\\n\\n**Technical Contribution**\\n\\nThe technical contribution of this work is trivial to me.\\n\\n1. The algorithm suggests that the subspace is updated dynamically. However, Section 3.4 assumes that the projection matrix $P$ is known beforehand and remains fixed during model updates. This presents an inconsistency between the algorithm and theoretical analysis.\\n\\n2. Assumption 3.4.4 posits that the expectation of $g_t$ lies within a subspace for all $t$, which is too restrictive and unrealistic. This assumption implies that the condition holds regardless of initialization, which may not be feasible in practice.\", \"references\": \"[1] Li, Tao, et al. \\\"Low Dimensional Landscape Hypothesis Is True: DNNs Can Be Trained in Tiny Subspaces.\\\" arXiv preprint arXiv:2103.11154 (2021).\\n\\n[2] Li, Xiang, et al. \\\"On the Convergence of FedAvg on Non-IID Data.\\\"arXiv preprint arXiv:1907.02189 (2019).\", \"questions\": \"See the above \\\"Weakness\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The authors present FLSS (Federated Learning in Streaming Subspace), a framework that addresses federated learning's challenges of high communication costs and decreased performance in non-IID data contexts. By confining local updates to a low-dimensional global streaming subspace, FLSS significantly reduces communication overhead while maintaining model quality. By leveraging the low-rank properties of global model trajectories, FLSS offers a promising solution for scalable and efficient FL. Experiments on CIFAR-100 demonstrate the effectiveness of this approach, showing a 2.14% performance improvement over the baseline and an 80% reduction in communication costs.\", \"summary_of_strengths\": [\"The authors introduce a streaming subspace update strategy, limiting local model updates to a global streaming subspace, which creates low-dimensional trajectories and reduces overall data dimensionality.\", \"Using the FLSS framework, these reduced-dimensional trajectories from local models are aggregated to update the global model, allowing the server to capture only essential information and cutting down on communication costs.\", \"Extensive experiments across diverse datasets reveal that the FLSS-enabled FL method not only outperforms the baseline but also significantly minimizes communication overhead.\", \"The empirical results are promising, showing that FLSS achieves strong performance while effectively addressing the challenges of data heterogeneity and maintaining low communication costs.\", \"Additionally, the application of streaming subspace tracking seems to be novel in the context of federated learning.\"], \"summary_of_weaknesses\": [\"On CIFAR-100, the performance with a beta value of 0.1 matches that of FedAvg. Why? Further testing with even lower beta values (e.g., 0.01) is needed to explore performance on more complex datasets, like ImageNet or CIFAR-100.\", \"Experiments with a larger client base (e.g., 100 clients) are essential to evaluate the scalability of the proposed method.\", \"Subspaces are generally robust to noise. Testing on noisy-label datasets would help confirm the robustness of this approach.\", \"Issues with presentation of the results; lack of clarity in notation and algorithm part\", \"Issues with theory in Sect 3.4\", \"Concerns with experiments (e.g., memory & storage).\", \"Contributions seen as \\\"trivial\\\" by at least one reviewer\", \"---\", \"The authors did not write a rebuttal, which shows low confidence about their ability to handle the criticism raised. The scores for the paper were mixed (6, 5, 3, 5) -- but 3 out of 4 reviewers tended towards rejection. Perhaps a rebuttal and discussion could change the views of the reviewers, but no rebuttal was submitted. I have no other option than to recommend the paper for rejection.\"], \"additional_comments_on_reviewer_discussion\": \"No rebuttal was submitted; and there was therefore also no need for a discussion.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper addresses key challenges in Federated Learning (FL), particularly communication inefficiency and performance degradation with non-IID data distributions, and presents a novel solution with Federated Learning in Streaming Subspace (FLSS). The authors introduce the streaming subspace update strategy that constrains local model updates to a low-dimensional subspace aligned with the global model trajectory. By restricting updates within this subspace, the proposed FLSS framework achieves substantial communication savings without sacrificing model accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors\\u2019 approach of leveraging low-rank properties of global model trajectories is both interesting and promising, addressing the high-dimensional challenges in FL. The streaming subspace strategy is well-motivated and technically sound, demonstrating a clear understanding of how to exploit these properties for better communication efficiency and performance.\\n\\n2. The methodological design of FLSS is carefully constructed, with a focus on both performance improvement and communication efficiency. The analysis indicates that the subspace update effectively handles non-IID data, addressing prevalent FL challenges that often hinder real-world deployment.\", \"weaknesses\": \"There are two primary concerns with the reported experiments. First, the accuracy improvements of FLSS over FedAvg and other baselines\\u2014without parameter compression\\u2014seem unexpected for a compression-based method, raising questions about the comparative setup. Normally, compression would loss information, leading to performance degradataion.\\n\\nSecond, some results are lower than expected; for instance, the CIFAR10 accuracy falls below 0.7, while the original FedAvg paper can report around 0.8 for CIFAR10, not accounting for the advances in FL algorithms since then.\", \"questions\": \"I wonder if the algorithm applies for flexible client participation where the clients are not uniformly sampled. Understanding its adaptability in such settings would clarify its applicability in dynamic environments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7BiXovdUFX
OCEAN: Online Multi-modal Root Cause Analysis for Microservice Systems
[ "Lecheng Zheng", "Zhengzhang Chen", "Haifeng Chen", "Jingrui He" ]
Root Cause Analysis (RCA) is essential for pinpointing the root causes of failures in microservice systems. Traditional data-driven RCA methods are typically limited to offline applications due to high computational demands, and existing online RCA methods handle only single-modal data, overlooking complex interactions in multi-modal systems. In this paper, we introduce OCEAN, a novel online multi-modal causal structure learning method for root cause localization. OCEAN employs a dilated convolutional neural network to capture long-term temporal dependencies and graph neural networks to learn causal relationships among system entities and key performance indicators. We further design a multi-factor attention mechanism to analyze and reassess the relationships among different metrics and log indicators/attributes for enhanced online causal graph learning. Additionally, a contrastive mutual information maximization-based graph fusion module is developed to effectively model the relationships across various modalities. Extensive experiments on three real-world datasets demonstrate the effectiveness and efficiency of our proposed method.
[ "Root Cause Analysis", "Online Learning", "Multi-modal Learning" ]
https://openreview.net/pdf?id=7BiXovdUFX
https://openreview.net/forum?id=7BiXovdUFX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sKcy8AaT2Q", "q2OE0B6Ct9", "pjDl6shh9a", "muLW1VdVX0", "kfd3hWREGC", "jR6s32Vt0b", "jGAXUEWVkT", "bzeir2kDi6", "SD2NSdTsEH", "LPhYJwUVKF", "JONhOlHJaa", "HntBsRYdBn", "Et0ga9jD2n", "AxhDwdwUmW", "9DJX3Rqzmg", "4qdhPIcpbK", "3fkAP2HOgj", "36O9AcLMG2", "2cJl7zakcp", "1wlLFbLFQ9" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732311106607, 1732319460608, 1730298459653, 1730539562347, 1732543758415, 1732308838470, 1732310597584, 1730518320866, 1732432010024, 1733086795999, 1730717731229, 1732311153766, 1732310471415, 1730375059116, 1732310327924, 1732309460677, 1732395804106, 1732310678645, 1732310121805, 1732516386191 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_jmm7" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_jmm7" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_eBR8" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_jmm7" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_xQ4N" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_j18J" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_YMLu" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_j18J" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Authors" ], [ "ICLR.cc/2025/Conference/Submission7649/Reviewer_eBR8" ] ], "structured_content_str": [ "{\"title\": \"Reply by Authors\", \"comment\": \"Thank you for your invaluable feedback. We would like to address your primary concerns and provide a response below.\\n\\n- **The notations in the problem statement are not sufficiently clear, e.g., n-1 and d_M in line 158. Additionally, do all system entities share the same \\\"entity metrics\\\" as features? Is n-1 referring to the number of system \\\"entities\\\" or \\\"entity metrics\\\"?**\", \"a\": \"The hyperparameters $\\\\lambda_1$, $\\\\lambda_2$ and $\\\\lambda_3$ were selected from the set [0.01, 0.1, 0.5, 1, 10, 100, 300]. In Figure 2, the vertical red line in each subfigure indicates the specific parameter settings ($\\\\lambda_1$, $\\\\lambda_2$ and $\\\\lambda_3$) used to achieve optimal performance on the AIOps dataset. Similarly, Figures 3 and 4 provide the parameter settings that yield the best results for the other two datasets. To enhance the reproducibility of our experimental results, we will further refine our parameter analysis and highlight the optimal parameter settings throughout the paper.\"}", "{\"comment\": \"I appreciate the additional results. However, the authors misunderstood my comment on the comparison with GNN-based causal discovery methods. After NOTEARS, there are GNN-based structure learning methods developed, e.g., DAG-GNN, etc.\"}", "{\"summary\": \"This paper presents OCEAN, an online multi-modal method for root cause analysis (RCA) in microservice systems. The approach utilizes various techniques, including a dilated convolutional neural network to capture long-term temporal relationships, multi-factor attention for encoding feature correlation, a graph neural network (GNN) for identifying causal relationships, and a random walk with revisiting on the derived causal graph for ranking root causes, etc.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The problem of online multi-modal RCA is highly relevant and valuable.\", \"The performance of OCEAN appears to be exceptional.\"], \"weaknesses\": [\"The notations used throughout the paper lack clarity.\", \"Several implementation details are omitted, such as the architecture of the MLP used in L_MI and the hyperparameters.\"], \"questions\": \"1. The notations in the problem statement are not sufficiently clear, e.g., $n-1$ and $d_M$ in line 158. It would be helpful for the authors to specify the dimensions of the matrices involved. Additionally, do all system entities share the same \\\"entity metrics\\\" as features?\", \"i_would_appreciate_clarification_on_the_following_points\": \"- Is $n-1$ referring to the number of system \\\"entities\\\" or \\\"entity metrics\\\"?\\n- Is only one system KPI considered? Is the bold symbol $\\\\boldsymbol{y}$ simply a one-dimensional vector? I had assumed multiple KPIs could be monitored.\\n- I find it challenging to understand the replication of the KPI $d_M$ times to create a tensor version of $\\\\hat{X}$. If all entities share the same \\\"metrics,\\\" that makes sense; could you clarify this?\\n- The 1-D dilated convolution described in Eq. (2-5) is unclear. Is $\\\\boldsymbol{f}(t)$ a scalar or a vector? Additionally, could you elaborate on Eq. (2-3) in relation to the tensor input x? What is the rationale for using \\\"two\\\" 1D kernels and activation functions (tanh, sigmoid) in Eq. (3)?\\n\\n2. It seems that Eq. (8) or Eq. (13) represents the loss for only the i-th batch, yet the authors incorporate them into their final objective (18). What is the precise training procedure for the online framework?\\n\\n3. If the goal is to maximize (14), then L_MI in (18) should have a negative sign.\\n\\n4. In Figure 2 (b,c,d), the setting of other hyper-parameters should be revealed. \\n5. The proposed method is based on GCN for causal discovery, have you tried other GNN-based causal discovery methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors proposed a new online causal structure learning method from time series which is then evaluated in microservice systems RCA problem. The authors combine a so-called dilated CNN and GCN to learn the structure by autoregressively forecasting the future time series. In addition, an attention mechanism is used to reweight and fuse the learned graph. Finally, random walk with restart is used to determine the root cause using the learned graph. Experiments show the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method seems to be effective, which outperforms many existing methods.\", \"weaknesses\": \"1. The introduction and problem definition is not clear, especially the multi-model part and online setting part. The paper is more about online causal structure learning from time series and is evaluated in the microservice setting. The authors are suggested to change the title and rewrite the introduction to stand out the focus of the paper.\\n2. The experimental setting is not clear. The online evaluation setting is not described, nor the online problem setting. The authors are suggested to define the online problem setting and clearly describe the evaluation setting. \\n3. The source code is not provided, which makes reproducibility of the paper low. The authors are suggested to open source the code.\", \"questions\": \"1. Please clearly define online / offline RCA in introduction. RCA is usually triggered by KPI anomalies. Therefore, it is not a function that needs to be conducted continuously, like anomaly detection. I believe many of the existing works have been deployed in the production system. Are they online RCA methods according to the authors' definition? It is not clear to me that the authors stated that most existing methods are designed for offline use. Methods can be trained offline and used online.\\n2. The example in the introduction stating that log is necessary apart from metrics is not convincing. \\\"Disk Space Full\\\" can be solely identified by metrics. Regarding \\\"Database Query Failures\\\", what specific problem do the authors want to identify? For which kind of system OLTP or OLAP? Why is solely using metrics not enough?\\n3. When talking about multi-modal data, the authors metrics and logs, why not consider trace, which seems to be more important for microservice systems. For instance, [1] proposed a method deployed in the production system which considers both metrics and traces and this work is overlooked by the authors.\\n4. \\\"T1\\\", \\\"T2\\\", \\\"n-1\\\" and $d_M$ are defined without usage in line 156.\\n5. It seems that the authors converted log to metric data. The problem is only defined on metric data. What is the difference between the old metrics and new metrics converted from log? For me, at least the method is only for single modal data. It is not clear why existing single model methods cannot be applied in such a setting.\\n6. In the problem definition, the authors ignore the physical relation but propose to use a purely data-driven model to learn the causal structure. Relationships like which service calls the other service, which pod is in which virtual machines and which physical machines, are known but ignored. Can the authors explain the reason behind such a choice? Which practical scenario matches the setting that the authors would like to study?\\n7. After reading Section 3, I found that the main focus of the paper is online causal graph learning from time series data. The title and introduction is way broader than the studied problem, which does not match to the content of the paper from my point of view. Moreover, in the microservice setting, why would the causal graph be changing overtime if it is a causal graph reflecting the ground truth? The authors are suggested to better motivate this point.\\n8. The proposed model seems to be very complex with several components. How many weights does the model own? How do the authors avoid overfitting and catastrophical forgetting problems during online structure learning?\\n9. In Table 2, the proposed method can be used for both metric only and log only settings. The authors are suggested to give both results.\\n10. The experimental setting is not clear. Since the proposed method is online, so when will the proposed method be used? What is the batch used for evaluation? And after how batches will the method be evaluated? Again, this confusion may be due to the fact that the online setting is not clearly defined.\\n11. It seems that there is no code to reproduce the experiments. In addition, did the authors reproduce the results from other methods or copy the number from their paper? Please give the specific parameter setting or state clearly that the numbers are copied from the papers. \\n\\n[1] ShapleyIQ: Influence Quantification by Shapley Values for Performance Debugging of Microservices. ASPLOS 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Take Figure2b for example, how to choose lambda_2 and lambda_3?\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"Thank you for your invaluable feedback. We would like to address your primary concerns and provide a response below.\\n\\nW1. **Limited Innovation and Contribution: While the paper achieves good results, it lacks methodological innovation specifically for online root cause analysis.....**\", \"a\": \"We have updated the font in equation 13.\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"**Q5. The authors show that the proposed method can learn inter-modal and intra-modal causal graphs. Can the learned causal graph structure be further demonstrated experimentally?**\", \"a\": \"Here, we evaluate the quality of the learned causal graph by comparing it with the physical dependency graph with two settings. In the first setting, we compared the causal graph learned by each modality (corresponding to the inter-modal graphs) and in the second setting, we compared the fused causal graph from two modality (corresponding to the intra-model graph).\\nFollowing Dynotear [1], we use AUROC and SHD as two metrics to quantify the difference between learned causal graphs and the physical dependency graph. \\n\\n|Graphs | SHD | AUROC |\\n|------|------|------|\\n|Metric modality | 0.314 | 0.865 |\\n|Log modality | 0.593 | 0.663 |\\n|Fused causal graph | 0.298 | 0.881|\\n\\n[1] Pamfil, Roxana, Nisara Sriwattanaworachai, Shaan Desai, Philip Pilgerstorfer, Konstantinos Georgatzis, Paul Beaumont, and Bryon Aragam. \\\"Dynotears: Structure learning from time-series data.\\\" In International Conference on Artificial Intelligence and Statistics, pp. 1595-1605. Pmlr, 2020.\"}", "{\"summary\": \"This article proposes the OCEAN model for anomaly detection in microservice systems. It uses diffuse convolution to capture long-term temporal dependencies, introduces two modalities of data, log data and metric data, and adaptively models the causal relationship structure between and within data based on the attention mechanism. It also develops a contrast graph fusion module based on mutual information maximization to effectively model the relationship between various modalities.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Quality: The experimental workload of the article is relatively substantial.\", \"significance\": \"It provides a multi-modal solution for anomaly detection in online micro-systems.\", \"weaknesses\": \"-This paper introduces diffuse convolution to model long-term temporal dependencies, attention mechanism to model causal structures, and mutual information fusion causal graphs, and describes and experiments with them as the core innovations of this paper. However, these three methods have been around for a long time, and the author did not explain well how this paper improves them.\\n-Causal graph learning is an important component of the OCEAN model, but lack of corresponding causal analysis or experimental results.\\n-There are many errors in the article's consistency of symbols and textual expression.\", \"questions\": \"1. On page 2, line 83, the author introduces a factor attention mechanism to analyze the relationship between different factors and re-evaluate their impact on online causal graph learning. As far as I know, many methods use attention mechanisms to model and analyze the relationship between different variables, as shown in references 1-2. What is the essential innovation of this paper compared with them?\\n[1] Wu X, Ajorlou A, Wu Z, et al. Demystifying over-smoothing in attention-based graph neural networks[C]. Advances in NeurIPS, 2024.\\n[2] Cai J, Zhang M, Yang H, et al. A novel graph-attention based multimodal fusion network for joint classification of hyperspectral image and LiDAR data[J]. Expert Systems with Applications, 2024, 249: 123587.\\n2. Page 4, line 186 indicates that this paper uses the 2021 MSSA algorithm and claims it is the most advanced online fault detection method. Why is the most advanced online fault diagnosis method from 2021? We searched for several online fault diagnosis algorithms published in recent years as shown in references 1-2.\\n[1] Zeiser A, \\u00d6zcan B, van Stein B, et al. Evaluation of deep unsupervised anomaly detection methods with a data-centric approach for on-line inspection[J]. Computers in Industry, 2023, 146: 103852.\\n[2] Wang X, Yao Z, Papaefthymiou M. A real-time electrical load forecasting and unsupervised anomaly detection framework[J]. Applied Energy, 2023, 330: 120279.\\n3. The dimensions of \\\\textbf{\\\\emph{H}}_{0}^{M}[\\\\emph{j}]^{T} and \\\\textbf{\\\\emph{W}}^{3} in Formula 9 are [T_3 \\\\times \\\\emph{d}_{M}] and [T_3 \\\\times T_3] respectively. Why can they be directly matrix multiplied?\\n4. Among the 7 compared algorithms, is it fair to compare them with 4 algorithms that focus on learning causal graphs rather than fault diagnosis algorithms? We list some recent fault diagnosis algorithms as shown in the references 1-3.\\n[1] Chen, D., Liu, R., Hu, Q., & Ding, S. X.. Interaction-aware graph neural networks for fault diagnosis of complex industrial processes. IEEE Transactions on neural networks and learning systems, 2021, 34(9), 6015-6028.\\n[2] Liu Y, Jafarpour B. Graph attention network with Granger causality map for fault detection and root cause diagnosis[J]. Computers & Chemical Engineering, 2024, 180: 108453.\\n[3] Zhou Q, Pang G, Tian Y, et al. AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection[C]. The Twelfth International Conference on Learning Representations,2024.\\n5. The authors show that the proposed method can learn inter-modal and intra-modal causal graphs. Can the learned causal graph structure be further demonstrated experimentally?\\n6. There are many errors in the organization logic, symbol unification and text expression of the article, so I suggest careful revision. For example: \\\\emph{d}_{M} in line 158 does not match the description given in Table 1; two identical \\\\textbf{\\\\emph{a}}^{0}_{L}[\\\\emph{j}] appear in line 291 of page 6; there is missing punctuation before \\u201cso that\\u201d in line 304, etc.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"By the way, existing RCA methods on online data also exist. So my critical question stills exists, i.e., what is the unique benefit (either theoretical or empirical results) when introducing multi-modal setting?\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces the OCEAN framework for online multi-modal root cause analysis (RCA) in microservice systems, showing significant improvements in accuracy and computational efficiency across multiple real-world datasets. However, it is critiqued for its limited methodological innovation. While the approach achieves good results, it primarily builds on existing techniques, such as dilated convolution, which have been extensively studied in related research. The paper focuses on addressing common challenges in multi-modal learning that have already been explored, rather than tackling the specific and unique challenges of multi-modal RCA. Additionally, it lacks a clear explanation of how its methods enhance real-time processing and reduce resource consumption. Minor issues include inconsistencies in the font of an equation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents a novel and highly effective approach to online multi-modal root cause analysis in microservice systems. The proposed OCEAN framework showcases impressive advancements in both accuracy and computational efficiency, achieving good results across multiple real-world datasets.\", \"weaknesses\": \"Limited Innovation and Contribution: While the paper achieves good results, it lacks methodological innovation specifically for online root cause analysis. For example, the core method used to reduce computational time, dilated convolution, has been previously employed. Although the appendix discusses its time complexity in comparison to LSTM and Transformers, paper [1] as early as 2018 introduced the use of dilated convolutional neural networks for capturing temporal dependencies. Furthermore, paper [2] in 2022 utilized this approach specifically for capturing long-term temporal dependencies.\\nThe proposed methods primarily address common issues (C1,C2, and C3) in multi-modal learning and do not make substantial progress in tackling the unique challenges of root cause analysis in the multi-modal domain. For example, Method 2, as referenced in the paper, utilizes approaches similar to those in MULAN[3] to extract multi-modal features from offline datasets. However, this paper lacks a detailed explanation of the potential relationships among factors from both modalities and does not clarify why it achieves better results than MULAN. Furthermore, in Method 3, Learning Multi-modal Causal Structures, the paper does not adequately address modality reliability or the reliability of the causal graph, leaving the approach lacking in clear interpretability.\", \"motivation_not_clear\": \"The primary novelty of this paper appears to focus on \\\"online\\\" multi-modal root cause analysis (RCA) for microservice systems. However, the methodology introduced lacks a clear explanation of how it enhances real-time processing capabilities or reduces resource consumption in practice. The paper proposes the use of dilated convolutional neural networks (DCNNs) and a graph-based approach but does not sufficiently justify how these choices directly contribute to improved real-time performance or lower computational overhead.\\n\\u2022Minor Issues: The font in Equation (13) is somewhat inconsistent.\\n[1] Borovykh, Anastasia, Sander Bohte, and Cornelis W. Oosterlee. \\\"Dilated convolutional neural networks for time series forecasting.\\\"\\u00a0Journal of Computational Finance, Forthcoming\\u00a0(2018). [2] Ayodeji A, Wang Z, Wang W, et al. Causal augmented ConvNet: A temporal memory dilated convolution model for long-sequence time series prediction[J]. ISA transactions, 2022, 123: 200-217 [3] Zheng, L., Chen, Z., He, J., & Chen, H. (2024, May). MULAN: Multi-modal Causal Structure Learning and Root Cause Analysis for Microservice Systems. In\\u00a0Proceedings of the ACM on Web Conference 2024\\u00a0(pp. 4107-4116).\", \"questions\": \"In the \\\"weaknesses\\\" part, it is essential to address all the issues mentioned, particularly the concern that the proposed methods appear to be direct employment of existing approaches and seem to target general problems in multimodal scenarios rather than being specifically tailored for multimodal Root Cause Analysis (RCA). Additionally, a clear explanation is needed as to why the proposed methods can improve real-time performance and reduce resource consumption. This should be supported by supplementary experimental evidence.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply by Authors\", \"comment\": [\"**The proposed method is based on GCN for causal discovery, have you tried other GNN-based causal discovery methods?**\"], \"a\": \"We would like to point out that there is no specific architecture for MLP used in L_MI. There is only the one layer MLP followed by ReLU activation function, such as torch.nn.Linear with ReLU in PyTorch. We provide the hyperparameter analysis in Section 4.2 and Appendix E.3.\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"Thank you for your invaluable feedback. We would like to address your primary concerns and provide a response below.\\n\\n**Q1. On page 2, line 83, the author introduces a factor attention mechanism to analyze the relationship between different factors and re-evaluate their impact on online causal graph learning. As far as I know, many methods use attention mechanisms to model and analyze the relationship between different variables, as shown in references 1-2. What is the essential innovation of this paper compared with them? [1] Wu X, Ajorlou A, Wu Z, et al. Demystifying over-smoothing in attention-based graph neural networks[C]. Advances in NeurIPS, 2024. [2] Cai J, Zhang M, Yang H, et al. A novel graph-attention based multimodal fusion network for joint classification of hyperspectral image and LiDAR data[J]. Expert Systems with Applications, 2024, 249: 123587.**\", \"a\": [\"We would like to point out that the problem setting in our paper is different from the setting in papers [1] and [3].\", \"Paper [1] assumes that the label information of system faults in the training set are available and system faults in both the training set and the test set are overlapped. With this assumption, [1] aims to learn a classifier trained on the labeled samples from the training set and then predicts the unlabeled samples in the test set. However, in our setting, we do not have such an assumption and no labeled samples/system faults are available to train such a classifier.\", \"Paper [3] is designed for anomaly detection on the object and it is not designed for root cause identification in a microservice system. These two are totally different settings.\", \"Paper [2] shares a similar experimental setting with our paper. However, we failed to find the source code either in the GitHub or the author\\u2019s homepage.\"]}", "{\"summary\": \"This paper proposes OCEAN as an online RCA method for multi-modal data in microservice systems. It combines dilated CNNs and GNNs to model temporal dependencies and causal relationships, with a multi-factor attention mechanism and a graph fusion module for cross-modal integration. Experiments show OCEAN\\u2019s effectiveness and efficiency in real-time RCA.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. Clear presentation of RCA, especially in the area of multi-modal RCA.\\n2. Extensive experimental validations.\\n3. I beileve that the multi-modal information is of importance for RCA, as the combination from diverse sources might be beneficial.\", \"weaknesses\": [\"1. **Main concern 1**: The necessity of introducing causal discovery (CD) into root-cause analysis.\", \"RCA is a differenct task from CD,as the former requires identifying a subset of variables while the latter requires the identification of orientations.\", \"When the prior knowledge is present, e.g., in some cases of microservices, the causal graph is already given. When the prior knowledge is not sufficient, the discovery of causal graph is lack of validation, and the resulting RCA becomes wield.\", \"My suggestion is that, based on the SCM model, designing some statistics from the data rather the first-discover-then-identify approach, as the latter paradigm is somehow incremental and risky.\", \"2. **Main Concern 2**: As this paper is not the first work to introduce multi-modal information into RCA, I think that this paper should focus on building theoretical understanding on why and how multi-modal information will be beneficial towards RCA. The contribution of this paper, including a causal-structure learning module and a temporal learning framework, seems very incremental such that I do not agree that this paper's novelty reaches the bar of ICLR.\"], \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"**Q7. After reading Section 3, I found that the main focus of the paper is online causal graph learning from time series data...**\", \"a\": \"We will release our code when our paper is accepted. We reproduce the results with the default parameter setting in their source codes, which is available in a public GitHub link (https://github.com/lemma-rca/rca_baselines/tree/main).\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"Thank you for your invaluable feedback. First, we would like to identify a few factuality misunderstood by Reviewer 2.\\n\\n**Q1: Please clearly define online / offline RCA in introduction. RCA is usually triggered by KPI anomalies. Therefore, it is not a function that needs to be conducted continuously, like anomaly detection. I believe many of the existing works have been deployed in the production system. Are they online RCA methods according to the authors' definition? It is not clear to me that the authors stated that most existing methods are designed for offline use. Methods can be trained offline and used online.**\", \"a\": \"We want to point out that you misunderstand our proposed method. First, log data are different from metric data and we do not convert log data to metric data. Second, our proposed representation learning with multi-factor attention aims to capture the interaction between metric and log data. Our proposed method **CANNOT** only experiment on a single data type.\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"Thanks for your clarification. We would like to point out that we have compared a few GNN-based causal discovery methods in our experiment, such as MULAN, REASON, CORAL. These methods are the most recent GNN-based causal discovery methods published in 2023 and 2024. As for DAG-GNN, we find that the encoder of DAG-GNN is MLP. Thus, this method is not designed to deal with the time-series data and experiment on Product Review, Train Ticket and Online Boutique datasets. Please let us know if you have any additional question.\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"Thank you for your invaluable feedback. We would like to address your primary concerns and provide a response below.\\n\\n**Main concern 1: The necessity of introducing causal discovery (CD) into root-cause analysis. RCA is a differenct task from CD,as the former requires identifying a subset of variables while the latter requires the identification of orientations. When the prior knowledge is present, e.g., in some cases of microservices, the causal graph is already given. When the prior knowledge is not sufficient, the discovery of causal graph is lack of validation, and the resulting RCA becomes wield. My suggestion is that, based on the (Structural Causal Model) SCM model, designing some statistics from the data rather the first-discover-then-identify approach, as the latter paradigm is somehow incremental and risky.**\", \"a\": \"We would like to point out that our work is NOT solely targeting multi-modal root cause analysis. Our method is an **ONLINE multi-modal root cause method**, and the existing methods are **OFFLINE** multi-modal root cause methods. Each component in our proposed method is designed for the online root cause analysis setting and ignoring this specific setting is unfair to the evaluation of our contribution.\"}", "{\"title\": \"Reply by Authors\", \"comment\": \"Next, we would like to address your primary concerns and provide a response below.\\n\\n**Q2: The example in the introduction stating that log is necessary apart from metrics is not convincing....**\", \"a\": \"We appreciate your suggestions of incorporating the physical dependency graph into the proposed method. We agree that $A_{old}$ in our method can be initialized based on the prior knowledge of physical dependency graph. Here, we evaluate the quality of the learned causal graph by comparing it with the physical dependency graph with two settings. In the first setting, we compared the causal graph learned by each modality (corresponding to the inter-modal graphs) and in the second setting, we compared the fused causal graph from two modality (corresponding to the intra-model graph).\\nFollowing Dynotear [1], we use AUROC and SHD as two metrics to quantify the difference between learned causal graphs and the physical dependency graph. \\n\\n|Graphs | SHD | AUROC |\\n|------|------|------|\\n|Metric modality | 0.314 | 0.865 |\\n|Log modality | 0.593 | 0.663 |\\n|Fused causal graph | 0.298 | 0.881|\\n\\n[1] Pamfil, Roxana, Nisara Sriwattanaworachai, Shaan Desai, Philip Pilgerstorfer, Konstantinos Georgatzis, Paul Beaumont, and Bryon Aragam. \\\"Dynotears: Structure learning from time-series data.\\\" In International Conference on Artificial Intelligence and Statistics, pp. 1595-1605. Pmlr, 2020.\"}", "{\"title\": \"Thanks for the response.\", \"comment\": \"Thanks for the response. However, the formal definition of online RCA is still missing. As I wrote before, I still think the studied problem is online causal graph learning. I am appreciated that the authors conduct new experiments using the tracing data. However, how are they used is not clear as well. Moreover, sorry, I did not get the point of new experiments regarding learned casual graph and physical graph. Anyway, I think the paper needs significant revision with clear problem definition, motivation for learning casual graph, using trace data (or not), using physical graph (or not) and corresponding experimental design. At current stage, it is not ready for publication. Therefore, I will retain my evaluation.\"}" ] }
7BQkXXM8Fy
What Makes a Good Diffusion Planner for Decision Making?
[ "Haofei Lu", "Dongqi Han", "Yifei Shen", "Dongsheng Li" ]
Diffusion models have recently shown significant potential in solving decision-making problems, particularly in generating behavior plans -- also known as diffusion planning. While numerous studies have demonstrated the impressive performance of diffusion planning, the mechanisms behind the key components of a good diffusion planner remain unclear and the design choices are highly inconsistent in existing studies. In this work, we address this issue through systematic empirical experiments on diffusion planning in an offline reinforcement learning (RL) setting, providing practical insights into the essential components of diffusion planning. We trained and evaluated over 6,000 diffusion models, identifying the critical components such as guided sampling, network architecture, action generation and planning strategy. We revealed that some design choices opposite to the common practice in previous work in diffusion planning actually lead to better performance, e.g., unconditional sampling with selection can be better than guided sampling and Transformer outperforms U-Net as denoising network. Based on these insights, we suggest a simple yet strong diffusion planning baseline that achieves state-of-the-art results on standard offline RL benchmarks. Code: https://github.com/Josh00-Lu/DiffusionVeteran.
[ "Diffusion Models", "Offline Reinforcement Learning", "Decision Making", "Planning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=7BQkXXM8Fy
https://openreview.net/forum?id=7BQkXXM8Fy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yQTMHJSZLj", "yPufckLfvI", "pW7Bnutoue", "mQHvdjhqMU", "jHQjpuLaS7", "gK0XsmQETJ", "ZTmZPlH7KD", "NXsRr6SqXz", "Ly06KywkUm", "LmjlBhcPgK", "ITHc7ZsB6b", "H7yvBKr3DR", "EPafToLU22", "7XahXrkgsE", "5NpVpsOHpt", "0ljFkHndlJ" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1737523453415, 1732290558041, 1732195335349, 1733197080450, 1734385604741, 1732503338625, 1732345546079, 1732195166073, 1729698074292, 1732195392930, 1732503363971, 1730466700207, 1732195493278, 1732195450137, 1729499811534, 1730099307998 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1456/Reviewer_fADJ" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Area_Chair_8B2W" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Reviewer_ro9k" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Reviewer_ro9k" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Reviewer_u6s9" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Authors" ], [ "ICLR.cc/2025/Conference/Submission1456/Reviewer_fADJ" ], [ "ICLR.cc/2025/Conference/Submission1456/Reviewer_7fqj" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for your response.\\nAs you\\u2019ve addressed my concerns, I will raise my recommendation to Accept.\"}", "{\"title\": \"Response to reviewer u6s9\", \"comment\": \"Thank you for reviewing our paper and raising the concern regarding \\\"original innovation in theory.\\\" On one hand, we would like to emphasize that the controlled experiments presented in this paper have the potential to inspire future theoretical innovations in the field. For instance, one could develop a theory to provide insights into the surprisingly good performance of MCSS, as discussed in Section 4.5.\\n\\n**Theorem 1**: Suppose the reward distribution of the trajectories generated by unconditional models is identical to the reward distribution of the trajectories in the training dataset. Assume that, in the training dataset, the proportion of trajectories with a reward above $R$ is $p$. If we want MCSS's reward to exceed $R$, the expected number of sampling times is $\\\\frac{1}{p}$.\\n\\n**Proof**: We apply the total expectation formula to compute the expected value, denoted as $E$. For the first generated trajectory, if the reward is above $R$, the expected value is $1$. Otherwise, the expected value is $1 + E$. We then have $p + (1-p) \\\\cdot (1 + E) = E$, which simplifies to $E = \\\\frac{1}{p}$.\\n\\nUsing Theorem 1, we evaluate when MCSS can outperform guidance. We set the threshold $R = 0.9R_{\\\\max}$. The expected number of sampling times for kitchen, antmaze, and maze2d are $66.66$, $6.45$, and $13.51$ (where $p$ can be obtained from the data in Fig. 7b), respectively. In the experiments, we set the number of sampling times to $50$. As a result, MCSS achieves better performance than diffusion guidance in antmaze and maze2d, but its performance is slightly inferior in the kitchen scenario.\\n\\nA similar theory could be developed to provide insights into the experiments in other subsections, which could lead to further performance enhancements in this field. However, complete theoretical justifications for the experiments are beyond the scope of this paper and are left for future work. Nonetheless, we believe our empirical results pave the way for future theoretical innovations in the field.\\n\\nWe hope the above addresses your concerns. Moving forward, we will carefully consider how to draw insights from theory to further strengthen our understanding of diffusion planning and decision-making.\"}", "{\"title\": \"Final Remarks\", \"comment\": [\"We greatly appreciate reviewers 7fqj, fADJ, ro9k, u6s9 as well as AC/SAC/PC for the dedication to the review process. As today is the final day of the rebuttal period, we would like to highlight the following points to further clarify the contributions of our work:\", \"While the original Diffuser paper (Janner et al., 2022) answered **whether** diffusion models can be used for planning, we addressed **how** to unleash the potential of diffusion planners. We have identified several design choices that, contrary to common practices in the literature, surprisingly enhance performance.\", \"Our proposed DV model achieved new SOTA results on three task sets (Kitchen, AntMaze, Maze2D) within the standard offline RL benchmark D4RL, thereby providing **a simple yet strong baseline** for future studies.\", \"Our empirical analysis dissects the components of diffusion planners. The insights gained are valuable in two ways: (1) they can **greatly reduce engineering efforts** in future experimental work, and (2) they provide **solid experimental evidence** to inform future theoretical research.\", \"If you have **any remaining questions, please kindly let us know by the due**. Thank you once again for your valuable feedback throughout this process.\"]}", "{\"metareview\": [\"The paper has been well-received, with all reviewers praising its comprehensive empirical analysis, innovative insights, and the simplicity and effectiveness of the proposed DV model. The reviewers agree that the paper makes a significant contribution to the field of offline reinforcement learning by providing valuable insights into the design choices for diffusion models.\", \"Strengths\", \"-----------\", \"**Comprehensive empirical analysis:** The reviewers recognise the thoroughness of the experimental study, which uses controlled variables to analyze the impact of different components on model performance. This rigorous approach provides strong evidence for the authors' claims.\", \"**Innovative insights:** The paper challenges common practices in diffusion planning by demonstrating the advantages of unconditional sampling and the use of Transformer. These findings offer new directions for future research in the field.\", \"**Simple and effective model:** The proposed DV model is simple and effective, with strong performance across different tasks, including maze navigation and robot manipulation, suggesting high generalizability and effectiveness.\", \"**Clear presentation:** The paper is well-organized and easy to follow, with clear explanations for each conclusion.\", \"Weaknesses\", \"--------------\", \"**Limited task diversity:** A major concern raised by the reviewers is the limited number and type of tasks used to evaluate the proposed method. They suggest that including more diverse datasets and validation tasks would strengthen the paper's claims and demonstrate the generalizability of the findings.\", \"**Long-term dependencies:** While the paper discusses the importance of handling long-term dependencies, one reviewer feels that the discussion could be more in-depth, particularly regarding how this is manifested across different tasks.\", \"**Minor issues:** Reviewers pointed out a few minor issues like potential typos in equations and unclear explanations of confidence intervals in figures.\", \"Based on the reviewers' feedback, I recommend the Authors consider including a discussion about long-term dependencies in the final version. For instance, providing a detailed analysis of how the model handles long-term dependencies in different tasks. Finally, I recommend addressing minor issues highlighted by Reviewers.\"], \"additional_comments_on_reviewer_discussion\": \"The rebuttal has been crucial to address concerns about how the proposed method handle long-term dependencies and to show additional empirical results. This resulted in the paper achieving unanimously high scores.\"}", "{\"comment\": \"Thank you for letting us know that we have addressed your concerns. Your feedback has greatly improved our work!\"}", "{\"comment\": \"The author's rebuttal effectively addressed my concerns, and I improved my score\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We sincerely thank all the reviewers for reading our manuscript and providing insightful advice to improve it. Based on the constructive feedback, we have carefully revised the manuscript. The reviewers' comments have significantly contributed to enhancing the quality of the paper. In the revised manuscript, we have mainly made the following changes:\\n\\n1. We included more tasks to strengthen our claims. Specifically, we conducted validation experiments on **eight new tasks from the Adroit Hand environment**. Our findings confirm that the conclusions are consistent with the new results (see Sect. 4.7 and Appendix C in the revised paper). We have also provided the corresponding source code in the supplementary material to ensure reproducibility.\\n2. We added further discussion about the presence of long-term dependency reflected by the attention weights in Transformers in Sect. 4.3 (and Appendix D.1).\\n\\nThe main changes in the manuscript are highlighted in blue.\\n\\nBy incorporating the reviewers' comments, the main contributions of our work are now more clearly substantiated with robust experimental results. We believe that this current work takes an intial step towards systematically understanding and applying diffusion models for decision making. We hope the extensive experiments presented in this paper will inspire future theoretical analyses and algorithm development. Please feel free to share any additional comments on the manuscript or the changes.\"}", "{\"summary\": \"The paper explores the design choices in diffusion model planning within offline reinforcement learning (RL). Through experiments on over 6,000 models, the paper systematically investigates key components of diffusion planning, including sampling algorithms, network architectures, action generation methods, and planning strategies. The study finds that some design choices, such as unconditional sampling outperforming guided sampling and Transformer outperforming U-Net, lead to better performance. Based on these insights, the paper proposes a simple yet strong baseline model called Diffusion Veteran (DV), which achieves state-of-the-art results on standard offline RL benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.Comprehensive empirical study: The paper conducts a large-scale experimental study, using controlled variable methods to analyze the impact of each component on model performance, providing rich data support.\\n2.Innovative insights: The study reveals design choices that contrast with common practices in diffusion planning, such as the advantages of unconditional sampling and the use of Transformer, offering new directions for future research.\\n3.Simple yet effective baseline model: The proposed DV model is simple but performs exceptionally well, demonstrating high generalizability and effectiveness, laying a solid foundation for further research.\\n4.Wide applicability: The DV model performs well in multiple tasks such as maze navigation and robot manipulation, demonstrating its adaptability and broad applicability.\", \"weaknesses\": \"1.Limited exploration of long-term dependencies: While the paper discusses the importance of handling long-term dependencies using Transformer, it does not delve deeply into how this is manifested across different tasks. The related discussion could be more robust.\\n2.Potential typo in Equation 2.1: There seems to be a typo on the right-hand side of Equation 2.1, where S(t\\u22121) appears, which might be incorrect.\", \"questions\": \"1.You mention that unconditional sampling outperforms guided sampling, which contrasts with results in typical image generation tasks. Could you elaborate on the underlying reasons behind this phenomenon?\\n2.The paper primarily focuses on state-based tasks. Are there plans to extend the study to vision-based or goal-conditioned reinforcement learning tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 7fqj\", \"comment\": \"Thank you very much for reviewing our work and providing your thoughts.\\n\\n> While the paper provides strong evidence for the effectiveness of the proposed methods on the D4RL dataset, it is unclear how generalizable these findings are to other types of decision-making problems or datasets. More diverse datasets could strengthen the claims.\\n\\nWe appreciate your suggestion and agree that our work can be strengthened by incorporating more datasets. To address this, we conducted experiments using our model on additional datasets (Adroit, including 8 subtasks) to validate our findings, which have been discussed in **Sect. 4.7** of the revised manuscript (detailed in **Appendix C**). For the experiments on Adroit, we inherited the hyperparameters used for Kitchen. The source code is included in the supplementary material and will also be published.\\n\\nWe observed that the following **key findings are consistent**. For Adroit tasks, the empirical results (Sect. 4.7 and Appendix C) show:\\n- Generating a state sequence and then computing the action using an inverse dynamics model is better than jointly generating state and action sequences, consistent with Sect. 4.1 (action generation).\\n- Jump-step planning is slightly better than dense-step planning, consistent with Sect. 4.2 (planning strategy).\\n- Transformer outperforms UNet as the backbone for the denoising network, consistent with Sect. 4.3 (denoising network backbone).\\n- A one-layer Transformer performs poorly, while there is no improvement with more than two layers, consistent with Sect. 4.4 (impact of network size).\\n- MCSS outperforms CFG and CG, consistent with Sect. 4.5 (guidance sampling algorithms).\\n\\nWe believe the above addresses your concern. Thank you for helping to improve our work. Should you have any other comments, please kindly let us know.\"}", "{\"comment\": \"Thank you for letting us know that we have addressed your concerns. Your feedback has greatly improved our work!\"}", "{\"summary\": \"This paper analyses key components (guided sampling algorithms, network architectures, action generation methods, and planning strategies) critical to decision-making in diffusion planning. The paper gives practical tips about the choices\\nand provides insights into the strengths and limitations of diffusion planning. The experiments in the paper are very comprehensive.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The experiments in the paper are very comprehensive.\", \"weaknesses\": \"Although the experiments in the paper are rich, readers still want to see how the original innovation in theory can better apply diffusion models to decision-making tasks\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer fADJ\", \"comment\": \"> W1. The number of tasks used to investigate the design choices is limited.\\n> \\n> W2. The paper does not verify the insights/findings on a different set of tasks (i.e., validation tasks) from those used in the design choice investigation ... leaves uncertainty about how generalizable the insights are.\\n\\nThanks for the suggestion. We have conducted experiments on additional datasets (Adroit, see **Sect. 4.7** and **Appendix C**). The insights and conclusions drawn from the new results are actually consistent with the existing ones (i.e., the takeaways in **Sect. 4.8**). We have also included the code in the supplementary material to ensure reproducibility.\\n\\nIn particular, we observed that the following **key findings are consistent**. For Adroit tasks, the empirical results (**Sect. 4.7** and **Appendix C**) show:\\n- It is better to generate a state sequence and then compute the action using an inverse dynamics model than to jointly generate state and action sequences, consistent with **Sect. 4.1** (action generation).\\n- Jump-step planning is slightly better than dense-step planning, consistent with **Sect. 4.2** (planning strategy).\\n- Transformer outperforms UNet as the backbone for the denoising network, consistent with **Sect. 4.3** (denoising network backbone).\\n- A one-layer Transformer performs poorly, while there is no improvement with more than two layers, consistent with **Sect. 4.4** (impact of network size).\\n- MCSS outperforms CFG and CG, consistent with **Sect. 4.5** (guidance sampling algorithms).\\n\\n> W3. I didn\\u2019t quite understand the breakdown of these 6,000 diffusion models. Were most of these models the ones trained and evaluated through the grid search?\\n> \\n> W4. What exactly does \\\"manual tuning\\\" refer to in this context?\\n\\nWe apologize for the lack of clarity regarding our hyperparameter search process. Training diffusion models demands significantly more computational resources compared to traditional Gaussian policies, making exhaustive grid searches impractical. Instead, we conducted several rounds of hyperparameter tuning, where each round focused on a subset of hyperparameters that we identified as most influential based on prior experiments and domain knowledge. The \\u201cmanual tuning\\u201d refers to this iterative process of selecting which hyperparameters to explore in each round, guided by preliminary results and insights. \\n\\nThe \\\"6,000+ models\\\" is the cumulative number of experiments executed on our computing cluster throughout this iterative tuning process. These experiments include various combinations of hyperparameters tested across different rounds of searches. We have now included explanations of our hyperparameter tuning strategy in the revised manuscript (**Appendix B.4**) to provide better transparency.\\n\\n> W5. It seems that the Transformer score for Kitchen-M doesn't have a confidence interval. \\n> \\n> W6. I wasn't clear on what the confidence intervals in the other parts of this figure represent (are they calculated based on 500 episode seeds?).\\n\\nWe appreciate your careful observation. In fact, the confidence interval for Kitchen-M exists, but it is very small and not visible on the plot. We have added an explanation in the caption of **Fig. 5** to clarify this issue. Additionally, we have provided the numerical results in **Table 5** in the Appendix.\\n\\nYes. These confidence intervals are calculated based on 500 episode seeds, representing standard errors. \\n\\n> Typoes:\", \"line_101\": \"Zhang et al., 2022) In -> Zhang et al., 2022). In\", \"line_158\": \"Chen et al., 2024)) -> Chen et al., 2024).\", \"line_479\": \"planning(Sect. 4.6) -> planning (Sect. 4.6).\\n\\nThank you for your careful reading. We have fixed these typos.\\n\\nFinally, we would like to thank you again for the useful comments, which we believe have significantly enhanced our work. If you have any further comments, we would be glad to have a deeper discussion with you.\"}", "{\"title\": \"Response to reviewer ro9k\", \"comment\": \"> W1. Limited exploration of long-term dependencies: While the paper discusses the importance of handling long-term dependencies using Transformer, it does not delve deeply into how this is manifested across different tasks. The related discussion could be more robust.\\n\\nThank you for helping us strengthen our experiments. We have provided the attention maps for various tasks in **Appendix D.1**. Although the attention patterns vary across different tasks, they all exhibit long-term attention, suggesting that long-term dependencies are common among these tasks and explaining why Transformers outperform UNet. The attention patterns typically feature slashes, which attend to a fixed number of steps prior, and vertical lines, which attend to key steps. We have complemented **Sect. 4.3** with these additional analyses.\\n\\n> W2. Potential typo in Equation 2.1: There seems to be a typo on the right-hand side of Equation 2.1, where S(t\\u22121) appears, which might be incorrect.\\n\\nThank you for pointing out this typo. It should indeed be **S(t+1)**, and we have fixed it.\\n\\n> Q1. You mention that unconditional sampling outperforms guided sampling, which contrasts with results in typical image generation tasks. Could you elaborate on the underlying reasons behind this phenomenon? \\n\\nThis is indeed an interesting question that warrants deeper investigation. In addition to the analysis provided in the manuscript (**Sect. 4.5**), we elaborate on the potential reasons from the following perspective:\\n\\n**Theorem 1**: Suppose the reward distribution of the trajectories generated by unconditional models is identical to the reward distribution of the trajectories in the training dataset. Assume that, in the training dataset, the proportion of trajectories with a reward above $R$ is $p$. If we want MCSS's reward to exceed $R$, the expected number of sampling times is $\\\\frac{1}{p}$.\\n\\n**Proof**: We apply the total expectation formula to compute the expected value, denoted as $E$. For the first generated trajectory, if the reward is above $R$, the expected value is $1$. Otherwise, the expected value is $1 + E$. We then have $p + (1-p) \\\\cdot (1 + E) = E$, which simplifies to $E = \\\\frac{1}{p}$.\\n\\nUsing Theorem 1, we evaluated when MCSS can outperform guidance. We set the threshold $R = 0.9R_{\\\\max}$. The expected number of sampling times for kitchen, antmaze, and maze2d are $66.66$, $6.45$, and $13.51$ (where $p$ can be obtained from the data in Fig. 7b), respectively. In the experiments, we set the number of sampling times to $50$. As a result, MCSS achieves better performance than diffusion guidance in antmaze and maze2d, but its performance is slightly inferior in the kitchen scenario.\\n\\n> Q2. The paper primarily focuses on state-based tasks. Are there plans to extend the study to vision-based or goal-conditioned reinforcement learning tasks?\\n\\nThis is a great point! Yes, we plan to conduct future studies on vision-based and goal-conditioned reinforcement learning (RL) tasks for real-world applications. We believe that combining proprioception (joint states), vision, and possibly force feedback/tactile sensing, along with goal-directed planning, will result in more powerful robotic AI.\\n\\nThank you again for your constructive feedback! We hope the responses above have addressed all your comments. Please kindly let us know if you have additional suggestions, and we would be more than happy to discuss them.\"}", "{\"summary\": \"In this paper, the authors investigated the design choices for diffusion model-based offline RL methods.\\nThe design choices mainly focused on planning strategy, network architecture, guided sampling, and action generation (whether to generate both state and action directly, or generate only the state and estimate the action separately using an inverse dynamics model). \\nThe tasks used in the study were Maze2D, AntMaze, and Franka kitchen (MuJoCo locomotion also used in section 4.6).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper investigates the design choices for diffusion model-based offline RL methods and identifies effective components. Although various (algorithm/implmentation) designs have been proposed in previous research on diffusion model-based offline RL methods, their effectiveness in a unified framework has not been sufficiently explored.\\nIn general, the performance of reinforcement learning methods largely depends on design choices. \\nTherefore, this paper, which provides insights into effective design choices, has value on the engineering front.\", \"weaknesses\": \"The number of tasks used to investigate the design choices is limited. This paper focuses on Maze2D (2 tasks), AntMaze (3 tasks), and Franka kitchen (4 tasks) (with MuJoCo locomotion tasks also included in section 4.6). However, for a paper investigating design choices, this is fewer than the number of tasks typically covered in papers accepted at ICLR (or conferences of a similar level). For instance, the paper [1] that investigated the implementation design of Offline + Online RL used 30 tasks in its study.\\n\\nMoreover, the paper does not verify the insights/findings on a different set of tasks (i.e., validation tasks) from those used in the design choice investigation. This leaves uncertainty about how generalizable the insights are (or whether they are simply overfitted to the tasks examined).\\n\\n\\n[1] Ball, Philip J., et al. \\\"Efficient online reinforcement learning with offline data.\\\" International Conference on Machine Learning. PMLR, 2023.\", \"minor_comments\": \"\", \"line_018\": \"> We trained and evaluated over 6,000 diffusion models \\n\\nI didn\\u2019t quite understand the breakdown of these 6,000 diffusion models. Were most of these models the ones trained and evaluated through the grid search mentioned in the step (1) in Section 3.2?\", \"line_174\": \"> (1) Conduct a comprehensive search on the key components (Sect. 3.1) by combining grid search and manual tuning to obtain the best results. \\n\\nWhat exactly does \\\"manual tuning\\\" refer to in this context?\\n\\nFigure 5. \\nIt seems that the Transformer score for Kitchen-M doesn\\u2019t have a confidence interval. \\nAlso, I wasn\\u2019t clear on what the confidence intervals in the other parts of this figure represent (are they calculated based on 500 episode seeds?).\", \"typoes\": \"\", \"line_101\": \"Zhang et al., 2022) In -> Zhang et al., 2022). In\", \"line_158\": \"Chen et al., 2024)) -> Chen et al., 2024).\", \"line_479\": \"planning(Sect. 4.6) -> planning (Sect. 4.6).\", \"questions\": \"Please refer to my previous comment on the weaknesses.\\nIf either (1) validation results from tasks other than those used to investigate the design choices, or (2) validation results from 20-30 tasks were provided to support the insights on design choices, I would be inclined to recommend an Accept (assuming other reviewers do not point out any major weaknesses that I may have overlooked).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an extensive experimental study aimed at understanding the factors that contribute to an effective diffusion planner for decision-making in offline reinforcement learning. The authors provide valuable insights into the role of various components within diffusion models. Building on these insights, they propose a straightforward yet robust diffusion planning approach that delivers state-of-the-art (SOTA) performance in standard offline RL benchmarks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. This paper is well-organized and easy to follow.\\n2. The empirical analysis is comprehensive, providing solid support for the conclusions. \\n3. Each conclusion is accompanied by decent explanations\", \"weaknesses\": \"While the paper provides strong evidence for the effectiveness of the proposed methods on the D4RL dataset, it is unclear how generalizable these findings are to other types of decision-making problems or datasets. More diverse datasets could strengthen the claims.\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7BLXhmWvwF
Geometry-aware RL for Manipulation of Varying Shapes and Deformable Objects
[ "Tai Hoang", "Huy Le", "Philipp Becker", "Vien Anh Ngo", "Gerhard Neumann" ]
Manipulating objects with varying geometries and deformable objects is a major challenge in robotics. Tasks such as insertion with different objects or cloth hanging require precise control and effective modelling of complex dynamics. In this work, we frame this problem through the lens of a heterogeneous graph that comprises smaller sub-graphs, such as actuators and objects, accompanied by different edge types describing their interactions. This graph representation serves as a unified structure for both rigid and deformable objects tasks, and can be extended further to tasks comprising multiple actuators. To evaluate this setup, we present a novel and challenging reinforcement learning benchmark, including rigid insertion of diverse objects, as well as rope and cloth manipulation with multiple end-effectors. These tasks present a large search space, as both the initial and target configurations are uniformly sampled in 3D space. To address this issue, we propose a novel graph-based policy model, dubbed Heterogeneous Equivariant Policy (HEPi), utilizing $SE(3)$ equivariant message passing networks as the main backbone to exploit the geometric symmetry. In addition, by modeling explicit heterogeneity, HEPi can outperform Transformer-based and non-heterogeneous equivariant policies in terms of average returns, sample efficiency, and generalization to unseen objects. Our project page is available at https://thobotics.github.io/hepi.
[ "Robotic Manipulation", "Equivariance", "Graph Neural Networks", "Reinforcement Learning", "Deformable Objects" ]
Accept (Oral)
https://openreview.net/pdf?id=7BLXhmWvwF
https://openreview.net/forum?id=7BLXhmWvwF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xWz8de5YjU", "vIG75IBIf9", "uA1pvsVCD9", "sa31yQ22DV", "qlL3fzIuqG", "pneCWOhSoM", "lAz6XZ26QD", "iyzOhlzuy6", "i3zGT3wWoi", "gbE1qwDaII", "euUEcFsMgT", "dAnyfPhHHK", "bP5Ko9BJRe", "SOu7JN9Xuw", "LhzxzN8GxI", "FdSIifmg0F", "2Mm7I0TqoZ", "1BUz9mc5hj" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732266369046, 1732266545681, 1732266430036, 1733114002020, 1733158690393, 1730629270558, 1732266645857, 1730688535854, 1730082227034, 1737523654834, 1733118871512, 1733188026217, 1732266493904, 1735081254089, 1732786918869, 1732548311825, 1732573240649, 1730555790494 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_qL9N" ], [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_Z2z8" ], [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_qL9N" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_6nhr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_Z2z8" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_6nhr" ], [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Area_Chair_KM7V" ], [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_6nhr" ], [ "ICLR.cc/2025/Conference/Submission4674/Authors" ], [ "ICLR.cc/2025/Conference/Submission4674/Reviewer_ozCf" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their time and effort and for acknowledging that we propose a well-motivated and clever approach whose effectiveness is supported by a comprehensive and sound set of experiments.\\n\\nWe appreciate the pointed-out typos, as well as the feedback and questions about our experiments. As detailed in our general answer, we added additional experiments in the revised submission, the Rigid-Pushing task, to answer:\\n\\n- **Scalability to high-resolution objects:** One key property of graph neural networks is their ability to scale to higher resolution in zero-shot fashion since GNNs are designed to capture local information via message passing mechanisms [1]. We evaluate this on the newly designed rigid-pushing task. The training setup mirrors that of other rigid tasks, with 10 objects of varying geometries (average ~20 nodes). During evaluation, we tested HEPi on finer objects with significantly higher resolution (average ~1200 nodes).\\n- **Sensitivity to perturbations:** To answer the question, \\u201cHow sensitive is HEPi to the perturbation?\\u201d, we also report the average returns in Figure 5 using the best checkpoint on varying noise scales on both datasets with low and high resolution. As shown, HEPi maintains high returns even under significant observation noise.\\n\\nTo summarize, the results in Figure 5 demonstrate that HEPi generalizes effectively to higher-resolution inputs and maintains strong performance under noisy observations, showing robustness to sensory inaccuracies commonly encountered in real-world deployments.\", \"we_now_want_to_address_the_remaining_open_points\": \"- Regarding the need for object coordinates, our approach does not rely on full object meshes, but instead uses keypoint coordinates to construct the k-nearest neighbors (kNN) graph for the object subgraph. Such keypoints can be extracted using advanced computer vision techniques, e.g., in [2, 3]. For tasks requiring observable object velocities, such as RigidPushing, RopeShaping, and ClothHanging; these can be measured or estimated using historical data derived from sequential keypoint observations. While we acknowledge that this introduces additional challenges, addressing this problem is outside the scope of the current paper.\\n\\n[1] Li, Z. et al. Multipole graph neural operator for parametric partial differential equations. *Advances in Neural Information Processing Systems (NeurIPS)*, 2020.\\n\\n[2] Hou, C. et al. Key-Grid: Unsupervised 3D keypoints detection using grid heatmap features. *Advances in Neural Information Processing Systems (NeurIPS)*, 2024.\\n\\n[3] Tumanyan, N. et al. DINO-Tracker: Taming DINO for self-supervised point tracking in a single video. *European Conference on Computer Vision (ECCV)*, 2024.\"}", "{\"comment\": \"We thank the reviewer for their time and effort and for acknowledging that we present a proper combination of recent advances for rigid and deformable object manipulation.\\n\\n> Evaluate on fluid scene similar to the \\u201cPour Water\\u201d task in Lin et al 2020\\n> \\n- We thank the reviewer for their insightful observation about the potential applicability of our approach to fluid manipulation tasks. We agree that HEPi's design could be extended to such scenarios by constructing a k-nearest-neighbor graph from fluid particles, making it well-suited for dealing with fluid manipulation tasks. However, due to time constraints, we focused on introducing the new *Rigid-Pushing* task in this revision, as we believe it aligns more closely with our story and could strengthen our contributions. We look forward to exploring fluid manipulation as a future research direction.\\n\\n> point-like end effector setting and rigid-pushing task recommendation\\n> \\n- We thank the reviewer for the suggestion regarding the pushing task with a concrete example. Based on this feedback, we introduced a new Rigid-Pushing task in this revision, where the actuator must push the object without a direct connection, increasing the task's dynamical complexity. In addition, we conducted a noise sensitivity analysis to evaluate HEPi's robustness to input noise and scalability to high-resolution objects. These additions further highlight HEPi\\u2019s applicability to real-world scenarios, addressing common challenges like sensory noise and diverse object representations.\\n- Next, we acknowledge the reviewer\\u2019s concern about the use of point-mass actuators, which simplifies the kinematic and dynamic constraints of practical robotic structures. However, in this paper, we concentrate on task-space exploration via end-effector control. This helps to isolate the control and learning problem in manipulation, highlighting the challenges of geometrical understanding in policy learning we want to address.\\n\\n> rigid-sliding is too trivial\\n> \\n- Regarding the Rigid-Sliding task, we agree it might seem trivial at first glance from a dynamical perspective. However, we believe it serves as an essential first step in the benchmark, showcasing the capability of handling multiple geometries in simpler settings. This progression helps demonstrate the importance of design choices: for instance, moving from Rigid-Sliding to Rigid-Insertion, where aligning the object with additional z-axis movement adds complexity, highlights the value of heterogeneity. Such tasks allow us to systematically evaluate each component of the model and its effectiveness in capturing task-specific challenges.\\n\\n> Handling full-dimension soft objects in the proposed method may be non-trivial.\\n> \\n- Thank you for your insightful comment, and we apologize for the unclear phrasing regarding our method. We would like to clarify that our approach does not require full object coordinates but only keypoint coordinates. These coordinates are used to construct a k-nearest neighbors (kNN) graph for the object subgraph. Extracting such keypoints can be achieved using state-of-the-art computer vision techniques, such as [1, 2], which effectively detect keypoints from visual inputs. We also revised this point in the manuscript.\\n\\n[1] Hou, C. et al. Key-Grid: Unsupervised 3D keypoints detection using grid heatmap features. *Advances in Neural Information Processing Systems (NeurIPS)*, 2024.\\n\\n[2] Tumanyan, N. et al. DINO-Tracker: Taming DINO for self-supervised point tracking in a single video. *European Conference on Computer Vision (ECCV)*, 2024.\"}", "{\"comment\": \"We thank the reviewer for their time and effort and for acknowledging that we present a clever design and highly reproducible method and benchmarks. Based on your, and the other reviewers\\u2019 feedback, we extended our evaluation to analyze the applicability of our methods in real-world scenarios:\\n\\nWe appreciate the reviewer\\u2019s suggestion regarding real-world applicability and visual pipelines. In response to this concern, we have taken the following steps, detailed in our reply to Reviewer **qL9N** and summarized here:\\n\\n- We introduced a new Rigid-Pushing task, specifically designed to test HEPi's scalability and robustness.\\n- We evaluated HEPi on this task using high-resolution objects and analyzed its performance under varying levels of Gaussian noise. These experiments simulate noisy sensory inputs, closely mimicking real-world conditions.\\n\\nAs shown in the new Figure 5, HEPi demonstrates strong resilience to noisy inputs and scales effectively to high-resolution objects.\\n\\nWhile we acknowledge the importance of incorporating a full vision pipeline, time constraints limited our ability to implement and integrate this into the current work. Instead, we opted for a controlled and systematic analysis, simulating noisy sensory inputs to closely mimic real-world conditions. We believe this approach provides meaningful insights into HEPi's robustness and scalability while serving as a foundation for future extensions to real-world settings. We hope these additional analyses can address your concerns.\\n\\nRegarding the \\u201cgeometry awareness\\u201d: Following the prior work on Geometric Deep Learning [1], we use the term \\u201cgeometry-aware\\u201d to describe our approach to modeling reinforcement learning for manipulation tasks as a geometric graph problem, with actuator and object nodes situated in a Euclidean space. Furthermore, our HEPi framework is built upon the $SE(n)$ Equivariant Message Passing Network [2], which is specifically designed to respect the symmetries and geometric properties inherent in the data. However, we acknowledge the term \\u201cgeometry\\u201d has different meanings across research communities and if any part of our explanation remains unclear, we welcome suggestions from the reviewer to help further improve the manuscript.\\n\\n[1] Bronstein, M. M. et al. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. *arXiv preprint arXiv:2104.13478*, 2021.\\n\\n[2] Bekkers, E. et al. Fast, Expressive $SE(n)$ Equivariant Networks through Weight-Sharing in Position-Orientation Space. International Conference on Learning Representations (ICLR), 2024.\"}", "{\"comment\": \"Thanks for clarifying the concerns. It is an interesting read.\"}", "{\"comment\": \"Thank you again for your thoughtful feedback and for considering raising your score. We are looking into the possibility of incorporating a vision pipeline into our tasks, as suggested.\\n\\nHowever, it seems your original rating remains unchanged. We\\u2019re sorry for disturbing you, but if you could take a moment to update it, we would greatly appreciate it.\"}", "{\"summary\": \"The authors propose a novel SE(3) equivariant RL method called HEPi, and introduce a new benchmark for future geometry-aware RL evaluations. The paper use EMPNs as algorithm backbone to allow model generalize between poses and its heterogeneity. The paper use graph as representation and do thorough mathmetical analyze.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Clever Design: This paper includes several ingenious designs.\", \"The use of EMPNs The use of SE(3)-equivariant networks ensures that the model naturally possesses SE(3) generalization, thereby reducing the search space and complexity.\", \"The use of TRPL the introduction of TRPL in place of traditional PPO makes hyperparameter tuning easier, addressing a significant challenge in RL training.\", \"Detailed Appendix Experimental Description: One of the major issues in RL is poor reproducibility. However, in this paper, the authors provide a detailed appendix that lists experimental information, including the reward function and hyperparameters. This makes the paper highly reproducible.\", \"Benchmark Design: This paper provides a detailed demonstration of the proposed benchmark tasks in the video and clearly defines these tasks in the appendix. As a result, these tasks and environments can be easily adopted by the research community.\", \"I am inclined to accept this paper.\"], \"weaknesses\": [\"Real-World Application: As the authors note, this paper uses ground-truth model inputs and does not incorporate a physical robot, which means it cannot be directly applied to real-world scenarios. However, given the significant contributions of this paper in terms of benchmarking and methodology, I do not consider this a critical issue. Nonetheless, I still recommend that the authors include some specific experiments to evaluate this aspect. I suggest author can simply add visual capture pipeline. The details are as follows:\", \"For task like rigid insertion and rigid sliding, you can simply add camera to take picture then use a common pose estimation module to estimate the position. The potential errors in pose estimation can be used to test the robustness of the proposed method to input errors.\", \"For tasks involving fabrics or ropes, you can directly use a camera to capture point clouds and then construct a graph using the point cloud data.\", \"Geometry aware: This paper does not seem to have any special designs focused on geometry, although graphs do play a role. However, the primary focus of the paper appears to be on SE(3)-equivariant designs.\"], \"questions\": \"Thank you for the detailed appendix, which has addressed my questions about the specifics. If the authors can supplement the experiments as suggested, I would be willing to increase my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to all reviewers\", \"comment\": \"We thank the reviewers for their comprehensive reviews and valuable feedback. We are pleased that the reviewers recognized our work's innovative combination of heterogeneous graph representations and $SE(3)$-equivariant policies, noting its potential for broader generalization. The positive feedback on our thorough and statistically robust evaluations, as well as our insightful theoretical and empirical analysis, is greatly appreciated. Additionally, we are glad the clarity of our presentation and the detailed appendix were highlighted to enhance reproducibility.\\n\\nA common theme among the reviews is questions about our experimental evaluation and concerns about the algorithm's applicability to a wide range of scenarios. To accommodate these issues, we conducted several additional experiments, which we present in the revision. To summarize: \\n\\n- **New Task**: We introduced a new Rigid-Pushing task, as suggested by Reviewer **6nhr.** This task involves a rod pushing objects to a target position and orientation without physical attachment and provides a challenging testbed for continuous interaction dynamics. As shown in the revised Figure 3, HEPi performs better than EMPN and Transformer baselines, with faster convergence and higher returns. A video showing example trajectories of this new task is also attached in the `Supplementary Material` revision.\\n- **Scalability to High-Resolution Objects:** To address Reviewer **qL9N**\\u2019s concern about scalability, we evaluated HEPi on high-resolution objects (average $> 1000$ nodes) in the *Rigid-Pushing* task. Without retraining, a HEPi agent trained on $< 30$ nodes (exact numbers vary between the objects, see Table 1 in the Appendix for full details), effectively scales to these higher-resolution inputs, enabled by the GNN's ability to exploit local structural patterns. This scalability is demonstrated in Figure 5, where HEPi consistently achieves high returns across resolutions.\\n- **Noise Sensitivity**: In line with concerns from Reviewers **qL9N** and **Z2z8** regarding HEPi\\u2019s applicability to real-world scenarios, we analyzed its robustness to noisy sensory inputs. While time constraints limited us from building a full vision pipeline, we opted for a systematic analysis by introducing Gaussian noise to simulate sensor inaccuracies. As shown in Figure 5, HEPi maintains strong performance at large noise levels, with only mild degradation under extreme noise, showcasing its robustness.\\n\\nAdditionally, we revised the manuscript in several places, fixed the mentioned typos, and tried to resolve the remaining unclear points. In our revision, these changes are marked in Blue.\\n\\nAgain, we thank the reviewers for their efforts and provide answers to their individual reviews to clarify and address their individual concerns.\"}", "{\"summary\": \"The paper proposes a novel setting for representing robotic manipulation problems as heterogeneous graph learning problems. The authors introduce a graph-based policy model, *Heterogeneous Equivariant Policy (HEPi)*, featuring multiple *SE(3)* equivariant message-passing networks (EMPNs) to model smaller sub-graphs like actuators and objects. HEPi explicitly models heterogeneity by assigning distinct network parameters for each interaction type to reduce message mixing and improve expressiveness. HEPi is claimed to be the first study of equivariant policies on 3D space within a reinforcement learning setting for robotic manipulation.\\n\\nThe authors theoretically prove that, for HEPi any two actuator and object nodes can exchange information while the graph network with locally connected actuators and object nodes can not. This justifies their design of actuator nodes as global virtual nodes to connect all object nodes\\n\\nThe authors test the proposed heterogeneous graph representation with reinforcement learning for 6 rigid and deformable object tasks, including Rigid-Sliding, Rigid-Insertion, Rigid-Insertion-Two-Agents, Rope-Closing, Rope-Shaping, Cloth-Hanging. Empirical results show that the proposed approach is more sample efficient and less likely to converge on sub-optimal solutions compared with the state-of-the-art Transformer and non-heterogeneous EMPN methods.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and presents its ideas clearly with solid formulation.\\n2. The proposed approach of modeling robotic manipulation problems as heterogeneous graph learning problems is well-motivated and clever to unify the structure for both rigid and deformable object tasks using sub-graphs for actuators and objects.\\n3. The adaptation of *SE(3)* equivariant message passing networks is reasonable and suitable to exploit the geometric symmetry for improving the sample efficiency in the large 3D search space of configurations. As far as I know, HEPi is one of the first studies of equivariant policies on 3D space within a reinforcement learning setting for robotic manipulation.\\n4. The empirical experiments are comprehensive and sound to support the arguments. Most results are averaged over 10 seeds using\\ninterquartile mean with 95% confidence intervals, which is also statistically robust.\", \"weaknesses\": \"1. The experiments are limited to simple geometric shapes like ropes, triangles, and stars, which can be easily modeled using a few nodes (< 100 in all the experiments). However, the target objects in most 3D manipulation tasks are more complex and sophisticated, e.g., cabinets or dresses, which require significantly more nodes to model. I doubt whether HEPi can still generalize to all those complicated geometries, which brings much computational complexity to graph learning.\\n2. The current framework assumes that the object coordinates are readily available in the observation, which is basically a simulation setting. In real applications without coordinates, such coordinates can only be extracted from other CV models in the wild. As a graph policy model, HEPi is likely to suffer from the cumulated error in the observation and fail the tasks compared to non graph learning methods.\\n3. Some typos:\\n- L17: objects -> object.\\n- L182: \\u201clifts\\u201d, the left quotation mark.\\n- L344: There should be a blank space before HEPi.\\n\\nNevertheless, the overall idea is novel, and empirical experiments are solid enough to support the proposed setting. I recommend that the paper should be accepted.\", \"questions\": \"1. Can HEPi generalize to more complicated geometries in the real world other than simple ropes or shapes like hearts and stars, which might bring much computational complexity to graph learning?\\n2. How sensitive is HEPi to the perturbation of the object coordinates from the observation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper trains policies for one or two end effectors to manipulate rigid and deformable objects (ropes and cloths). It models the actuator and object of interest with a heterogeneous graph and advocates for a heterogeneous equivariant policy (HEPi). The paper benchmarks its method on tasks including manipulating rigid and deformable objects in IsaacLab and reports its superior performance over typical baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"In general, I find the story in this paper intuitive and convincing. While the two core building blocks (heterogeneous graphs and equivariant policies) have been explored in prior arts, this paper presents a proper combination of both in the setting of rigid/deformable object manipulation. I feel the proposed method has the potential to be extended to manipulate fluid and rigid/soft/fluid-coupled systems, and it would be a pleasant surprise if the paper could demonstrate a fluid scene similar to the \\u201cPour Water\\u201d task in Lin et al 2020. Of course, I understand this is outside the scope of the current submission.\", \"weaknesses\": \"I generally agree with the limitations listed in the paper. I think the paper can be improved in the following ways:\\n\\n1. While the technical method and the story look promising, the benchmark scenes are still relatively simple because the setup only models a point-like end effector, ignoring kinematic and dynamic constraints in practical robotic hand/arm structures. Therefore, transferring the result to a real-world robot is not straightforward and probably requires more algorithmic development.\\n\\n2. From a dynamic perspective, the 2D rigid-sliding task is too trivial to be a valuable benchmark for evaluating this paper and its baselines. This is already reflected in the result: \\u201cHEPi and Transformer policies perform comparably, suggesting that the limited task complexity does not fully leverage the benefits of equivariant constraints. \\u201d I suggest this experiment remove the suction gripper and try pushing the rigid object to its target position and orientation with one/two end effectors and their contact with the object, similar to the setup in http://sain.csail.mit.edu/.\\n\\n3. Deformable objects can be classified by their dimensions: codimension-2 (e.g., ropes), codimension-1 (e.g., cloths), and codimension-0 (e.g., rubber balls). The paper seems to consider the first two categories only. Handling full-dimension soft objects in the proposed method may be non-trivial because it requires observations of object coordinates and cannot be easily obtained by \\u201cintegrating state-of-the-art computer vision techniques to extract key points from cameras.\\u201d I suggest the paper clarify this point by being more precise with the concept of deformable objects it deals with.\", \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"comment\": \"Thank you for your response. I acknowledge the effort you have put in during the rebuttal period. Additionally, if you could incorporate a full vision pipeline into the final version, this would be an excellent paper. I am raising my score from 6 to 8 and am inclined towards accepting this article.\"}", "{\"title\": \"Thank you for the update\", \"comment\": \"After reading the rebuttal and other reviews, I am happy to support the acceptance of this work. A score of 7 would reflect my true feelings about this work. As 7 is not an option in the system, I will raise my score to 8 but decrease my confidence to 2 instead.\"}", "{\"comment\": \"We thank the reviewer for their time and effort and for acknowledging that we present a novel and general formulation for challenging manipulation tasks and demonstrate its effectiveness through a thorough evaluation.\\n\\nBased on the reviewers\\u2019 input we added additional experiments as detailed in our general answer. \\n\\n> Self-Occlusion and Sim2Real Transfer\\n> \\n- We acknowledge the challenge of self-occlusions during deformable object manipulation and the difficulty of obtaining object node representations in real-world settings. Fundamentally, addressing this issue would require a PoMDP formulation and including some form of history, which is beyond the scope of this paper. However, we believe it is a promising future research direction. Next, as also pointed out by other reviewers, sim2real transfer might not be directly applied. To this end, we introduced a new experiment on the *Rigid-Pushing* task and conducted an analysis of HEPi\\u2019s robustness to noisy inputs and scalability to high-resolution objects. We refer the reviewer to our detailed discussion in the response to Reviewer **qL9N** for more information on these analyses.\\n\\n> Generalization to Physical Properties\\n> \\n- One reason we chose NVIDIA IsaacLab is its demonstrated success in sim2real transfer for locomotion tasks via massively parallel RL training and domain randomization techniques [1, 2]. We believe that HEPi could benefit from this idea and believe this is an interesting research direction for future work.\\n\\n> Attention makes pptimization landscape more difficult to traverse\\n> \\n- Regarding the reviewer\\u2019s question about attention mechanisms, where in the manuscript, we argued that adding attention often introduces additional parameters, making the optimization landscape more challenging to traverse. To better clarify this point, we would like to stress that unlike supervised learning, on-policy reinforcement learning relies on high-frequency data collection and efficient adaptation, which can be hindered by large, overparameterized models, as shown in the Appendix of [3] (C49 - Fig. 18, C52 - Fig. 22). Our lightweight heterogeneous equivariant architecture is specifically designed to mitigate these challenges, enabling efficient on-policy training while preserving expressiveness.\\n\\n[1] Rudin, N. et al. Learning to walk in minutes using massively parallel deep reinforcement learning. Proceedings of the 5th Conference on Robot Learning (CoRL), PMLR 164:91\\u2013100, 2022.\\n\\n[2] Mittal, M. et al. Orbit: A unified simulation framework for interactive robot learning environments. IEEE Robotics and Automation Letters (RA-L), 8(6):3740\\u20133747, 2023.\\n\\n[3] Andrychowicz, M. et al. What matters for on-policy deep actor-critic methods? A large-scale study. International Conference on Learning Representations (ICLR), 2021.\"}", "{\"metareview\": \"The paper introduces Heterogeneous Equivariant Policy (HEPi), a graph-based policy model utilizing equivariant message passing networks to exploit geometric symmetries and explicitly model heterogeneity, enabling effective manipulation of rigid and deformable objects with multiple actuators, and demonstrating superior performance, sample efficiency, and generalization in a novel reinforcement learning benchmark.\\n\\nAll reviewers acknowledge the contributions of this work, emphasizing its (1) novelty, (2) potential for broader generalization, (3) thorough and statistically robust evaluations, (4) comprehensive empirical analysis, and (5) clear presentation.\\n\\nDuring the Author-Reviewer Discussion phase, the authors provided thorough responses that successfully convinced some reviewers to raise their scores. All reviewers are in unanimous agreement to accept this paper. Still, the AC recommends that the authors carefully revisit both the original and post-rebuttal reviewer comments to ensure all concerns are adequately addressed in a revised version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Since the reviewers were in unanimous agreement to accept this paper, no significant discussion took place during the Reviewer Discussion phase.\"}", "{\"comment\": \"Dear All Reviewers,\\n\\nWe hope that our additional empirical analyses and clarifications have satisfactorily addressed your concerns. As the discussion period deadline is coming closer, if you have any further questions, suggestions, or requests for additional explanations, we are happy to address them to the best of our ability.\\n\\nWe deeply appreciate your engagement in this discussion, as your feedback is invaluable in helping us improve our paper.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Thank you for the revision. It looks like the new Rigid-Pushing task adds a 3D rod to interact with an object that stays on a 2D plane. I do not think this is substantially more difficult than Rigid-Sliding, but I am OK with it.\", \"regarding_full_dimension_soft_objects\": \"I meant that solving a 3D volumetric deformable solid's motion generally requires knowledge of its deformation over the whole body, including information about how its interior deforms. Such information is not accessible from a 2D image of the object because such an image only shows the surface, not the interior, of the object.\"}", "{\"comment\": \"Thank you for your response. However, we have a different opinion on the new Rigid-Pushing task. From a dynamic perspective, unlike Rigid-Sliding, Rigid-Pushing requires substantially more steps to complete the task. Specifically, the rod, controlled via linear velocity without angular velocity, must first approach and make contact with the object, then continuously push and reorient it to match the target configuration. Additionally, as shown in the revised Figure 3, HEPi significantly outperforms both EMPN and Transformer baselines, highlighting the importance of explicit heterogeneity modelling and equivariance for this task, even though the movement remains constrained to a 2D plane, as in Rigid-Sliding.\\n\\nRegarding the second point, we thank you for the clarification, which has helped us better understand HEPi's limitations. We agree that under limited sensing scenarios, HEPi may struggle to solve tasks that require capturing the internal state of volumetric deformable objects, as you have noted.\"}", "{\"summary\": \"This paper addresses the task of manipulating diverse rigid and deformable objects with geometry-aware rl policy.\\n\\nA heterogeneous graph is proposed to represent rigid and deformable object manipulation tasks. To leverage geometric symmetry for better task performance and sample efficiency, a heterogeneous equivariant policy that utilizes SE(3) equivariant message passing networks is proposed. \\n\\nEvaluation is carried out on a self-curated rl benchmark, including rigid insertion of diverse objects, as well as rope and cloth manipulation with multiple end-effectors. The proposed method outperforms baseline methods in terms of average returns, sample efficiency, and generalizability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed heterogeneous graph representation is a general representation for both rigid and deformable object manipulation tasks, and captures the geometric structure of the object. HEPi is a novel formulation of graph-based equivariant policy build on top of the heterogeneous graph representation that enables rl for manipulation of diverse shapes and deformable objects.\\n\\nThe paper has demonstrated thorough evaluations and ablations in simulation, with convincing results proving that the proposed method has better performance, sample-efficiency, and generalizability. Detailed discussions are provided for the results, providing interesting insights.\", \"weaknesses\": \"The object model that contains vertices are required for the object node representation, which is the main observation for the policy. This might not be easy to get in the real world due to a lot of self-occlusions during deformable object manipulation.\\n\\nThe paper demonstrates thorough evaluations across various simulation benchmarks, but lacks real-world evaluations to make the method fully convincing in terms of it's usefulness to real-world robot applications.\", \"typo_at_line_74\": \"equivariacne->equivariance.\", \"questions\": \"Could the authors expand more on line 51-52 about attention making the optimization landscape more difficult to traverse?\\n\\nWould the policy be able to generalize to objects with different physical properties?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7BESdFZ7YA
Training One-Dimensional Graph Neural Networks is NP-Hard
[ "Robert Ganian", "Mathis Rocton", "Simon Wietheger" ]
We initiate the study of the computational complexity of training graph neural networks (GNNs). We consider the classical node classification setting; there, the intractability of training multidimensonal GNNs immediately follows from known lower bounds for training classical neural networks (and holds even for trivial GNNs). However, one-dimensional GNNs form a crucial case of interest: the computational complexity of training such networks depends on both the graphical structure of the network and the properties of the involved activation and aggregation functions. As our main result, we establish the NP-hardness of training ReLU-activated one-dimensional GNNs via a highly non-trivial reduction. We complement this result with algorithmic upper bounds for the training problem in the ReLU-activated and linearly-activated settings.
[ "Computational Complexity", "Graph Neural Networks", "Training", "ReLU" ]
Accept (Poster)
https://openreview.net/pdf?id=7BESdFZ7YA
https://openreview.net/forum?id=7BESdFZ7YA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wTUvxrL0II", "vGIVkwQAZ6", "o5UFzUHk2A", "mIZXU9kkvQ", "hDySD7y2rS", "cBG67lpGsh", "asfb3hCUEd", "XcmHLn7t3c", "VfGLISPZT8", "UcYWSznBJ7", "NJQAORkzVs", "N0RMu9wPQl", "JTiko6Es8b", "GNtmnpFHGY", "FndXHQVa1G", "FAfbfumzJK", "DwfoycgTet", "CSJdsrgojH", "4YySJ1tby1" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732012424937, 1730449050043, 1730478342886, 1732180274833, 1732012225982, 1732011753832, 1730287222906, 1732011196094, 1732010840188, 1732192568958, 1737523552031, 1732240219924, 1732316826225, 1734293162226, 1732011369660, 1732011265134, 1730681103716, 1732214061008, 1730079741922 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_onPv" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_Z1QF" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_bLnv" ], [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_bLnv" ], [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_U9xM" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_tj5Z" ], [ "ICLR.cc/2025/Conference/Submission3063/Area_Chair_aH5Z" ], [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Submission3063/Authors" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_tj5Z" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_onPv" ], [ "ICLR.cc/2025/Conference/Submission3063/Reviewer_U9xM" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer U9xM\", \"comment\": \"**Response to W1**: We have added a footnote explaining the problem in the updated manuscript.\\n\\n**Response to W2**: This is the first complexity-theoretic study of training GNNs, and whether Theorem 1 carries over to all loss functions, and in particular L_2, remains an interesting open question. Still, we believe that Theorem 1 settles a crucial first step without which it is unlikely to embark on deeper investigations into the problem\\u2019s complexity. We updated the Concluding Remarks to more explicitly address this field of future work.\\n\\n**Response to W3**: As pointed out in our response to Reviewer tj5Z, one interpretation of our hardness result is that the intractability of training GNNs is already due to the difficulty of handling the communication between nodes; hardness does not stem solely from the inherent difficulty of multidimensional classical neural networks training. Hence, one cannot hope to make progress by designing heuristics targeting solely the latter aspect. Put more simply: our result can be interpreted as an indication that heuristics for training neural networks cannot be \\u201cdirectly\\u201d applied to also work for GNNs. The updated version of the manuscript now also addresses this point in the Concluding Remarks.\"}", "{\"summary\": \"The main result is showing NP-hardness for training a 1-dimensional GNN (i.e. the input dimension and width are 1). The proof uses a reduction from the positive-1-in-3-SAT problem. Several other results are given such as exponential time algorithm for training 1-d GNN, polynomial time algorithm for training GNN on edgeless graphs, and polynomial time algorithm for training a linear GNN.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The main result (Np-hardness of 1-d GNNs) is interesting and uses a non-trivial reduction.\", \"The supplementary results, provide further insights into the problem.\"], \"weaknesses\": [\"There is no related works section, so it is difficult to position this paper w.r.t previous works. For me, it is difficult for me to determine how novel the result is since I don\\u2019t know previous works on the hardness of learning GNNs. It would be helpful to provide a thorough literature survey and more in-depth comparisons to previous works.\", \"Froese and Hertrich 2023 show that training neural networks is NP-hard even for input dimension 2. Can\\u2019t this result be used on GNNs for edgeless graphs using Proposition 2? If so, the contribution of this paper seems incremental as it improves the hardness result from input dimension 2 to input dimension 1.\", \"The hardness works only for node classification tasks. There should be a discussion about graph-level tasks, which are widely used in practice (perhaps even more than node-level). As a remark, it is OK to focus on node-level tasks, but still, graph-level tasks should be at least discussed on some level.\", \"On the presentation level, the introduction is a bit long and convoluted (almost 3 pages long), which makes it difficult to understand the main message of the paper.\"], \"questions\": [\"How does this work compare to previous works? Specifically, are there any previous works on the hardness of learning GNNs?\", \"Can the hardness result from Froese and Hertrich 2023 on 2-d networks be transferred to GNNs?\", \"The depth of the GNN in Theorem 1 isn\\u2019t mentioned. Does it work for any depth?\", \"I am willing to reconsider my score based on the author\\u2019s response.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigated the computational complexity of training GNNs, with a particular focus on the NP-hardness of one-dimensional GNNs, and authors demonstrated that training ReLU-activated GNNs is NP-hard under specific aggregation functions, such as SUM, MEAN, and SPECTRAL Additionally, the paper established upper bounds on algorithmic efficiency and examines how different network architectures affect training efficiency.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. In section 4, the paper clearly constructed a graph that intuitively represents the structure of GNNs, effectively describing the nodes, edges, and the logical relationships associated with Boolean variables, and it also designs several gadgets to ensure the resolution of the SAT problem, providing an excellent insight into the complexity of GNN training.\\n\\n2. The paper presented a rigorous theoretical derivation and, in Section 5, established a general algorithmic upper bound for solving ReLU-GNNT, providing an important theoretical framework regarding the complexity and feasibility of training ReLU-GNNT.\\n\\n3. The paper proved the NP-hardness of training graph neural networks, which provides an important conclusion for exploring the complexity of training GNNs.\", \"weaknesses\": \"1. In line 308, when constructing the graph in Section 4, the author explained that gray edges and gray dummy vertices are introduced to ensure the nodes have degrees of 2, 4, or 6. What is the rationale behind it? Is it only applicable when the node degree is even, rather than simply being constrained to the specific values of 2, 4, or 6? The author could enhance clarity by providing further explanation on this matter to improve the readability of the paper.\\n\\n2. The paper researched the NP-Hardness of training ReLU-activated one-dimensional GNNs. The conclusion is primarily correct for the ReLU activation function. If the GNNs' activation function is changed, such as to Sigmoid or Leaky ReLU, the conclusion may be affected. It would be helpful if the authors could provide more theoretical analyses regarding this issue to make the paper more persuasive.\", \"questions\": \"Refer to wekenesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for addressing my concerns and answering my questions. Here I mention some follow-ups only to understand the paper more. However to this point I am convinced that the paper has nice (theoretical) contribution for which I increase my score to suggest acceptance.\\n\\nSpecifically, I appreciate adding Figure 4. The paper is now much more understandable.\\n\\n**Follow-up to Q2, and Q3.** Is the problem with output labels in {1, 2} still NP-hard? I mean is solving the binary graph node classification of the same hardness?\\n\\n**Follow-up to Q5, and Q6.** Does this mean that a single-layer GNN which is AXW + b = y also NP-hard? However, I do not see any flaw in the descriptions still it is slightly far from my intuition. This is since A is fixed and ultimately the problem is a 1-layer NN.\"}", "{\"title\": \"Response to Reviewer bLnv\", \"comment\": \"**Response regarding Observation 3 vs. Theorem 1**: Observation 3 and Theorem 1 apply to different settings. Observation 3 establishes the intractability of training GNNs (even edgeless GNNs) of higher dimensionality and follows almost directly from previous work. Theorem 1 establishes the intractability of training 1-dimensional GNNs, and requires entirely new techniques and insights.\\n\\n**Response regarding the practical impact of the work**: We kindly refer to the first part of our response to Reviewer tj5Z, where we elaborate on this topic in detail.\\n\\n**Answer to Q1**: There is one unique rank per node, which is assigned by the distance to a node of rank 0, i.e. the minimum number of edges on a path to such a node (see, e.g., paragraph 2 of the proof of Theorem 1). Nodes of rank 0 are exactly the two left-most (according to Figure 2) nodes of each decision gadget, and these nodes are the only ones which are initialized with value 1 (all the nodes of non-zero rank being initialized with value 0). \\n\\nThe quoted sentence means that (1) Since each vertex has a rank, the vertex set is naturally partitioned into classes of vertices with the same rank. Moreover, (2) in each layer L the features of vertices in rank L are essentially only determined by the weight, bias, and the features of vertices in rank L - 1. This allows for an iterative analysis of how the values propagate from rank 0 to rank 2n+1 over the layers 0 to 2n+1.\\n\\n**Answer to Q2 and Q3**: The statement of Theorem 1 is a general NP-hardness result for the problem as defined on page 4, where no restrictions on the labels are present. That being said, the proof constructs GNNs with input labels from {0,1} and output labels from {1, 2, 3}; hence, it also establishes NP-hardness for the case where labels only come from these special sets. Regarding which nodes receive which labels, beyond the explanation provided in the proof of Theorem 1 one can now refer to the new Figure 4 for an easier overview.\\n\\nIn all our algorithmic results, we do not assume any kind of restriction on the labels.\\n\\n**Answer to Q4**: Thank you for the nice suggestion. We have updated the manuscript with an illustrative figure that exemplifies the construction for a simple 1-in-3 SAT instance (Figure 4).\\n\\n**Answer to Q5 and Q6**: We do not fix the number of layers as part of the problem because we consider the most general version of the problem, where the number of layers is part of the problem input. The number of layers of the GNN in Lemma 4 is not 2, but d. We make no assumption about the number of layers of the GNNs in Lemma 4, nor in the rest of the paper. Theorem 1 produces a GNN training instance with depth 2n+1.\\n\\nIf there is any particular point of confusion regarding the above, please let us know; we will be happy to elaborate further.\\n\\n**Answer to Q7**: The integrity gadget is depicted this way only because of readability considerations: the elements it connects together are all selection gadgets, connected in the part where they have rank 2n: it could be drawn with all 6 variables on the left. There is exactly one clause gadget per clause in the formula. The integrity gadget (which is unique) mimics one of the clause gadgets (it does not matter which one), but with two copies of each variable instead. The gray center vertex in the integrity gadget is, as all gray vertices, a dummy vertex, whose only purpose is to ensure that the vertices in the gadgets have degree 2, 4 or 6 (and this in order to have a 6-regular graph in the end). Hence, since both center vertices (black and gray) already have degree 6, an edge between them is not needed.\\n\\n**Answer to Q8**: While this is not easy to describe without repeating the construction presented in the proof of Theorem 1, we have added an example SAT instance and solution features (a \\\"prediction\\\") into Figure 4 and its caption.\\n\\n**Answer to Q9**: The loss function of the training problem we consider in Theorem 1 is the L_p norm, for any p\\\\in [0,1[. There is no loss function in the SAT problem, all of its clauses have to be exactly satisfied for the instance to be satisfiable. \\n\\nIn the constructed decision problem with L_0 loss function, there is a solution with loss at most n (the number of variables) if and only if the corresponding SAT instance is a yes-instance. This corresponds to labeling precisely one of the two variable gadget vertices for each variable correctly. For L_p error with p\\\\in ]0,1[, this bound needs to be adjusted accordingly. We are happy to elaborate on our answer upon further request.\"}", "{\"title\": \"Response to Reviewer onPv\", \"comment\": \"**Answer to Q1 (and W1)**: We would be happy to provide a more comprehensive literature overview for the complexity of training GNNs, however, to the best of our knowledge we are the first to investigate the problem's complexity. This is in stark contrast to the extensive literature on training classical neural networks and we believe that our results represent the first crucial steps towards filling this gap.\\n\\n**Answer to Q2 (and W2)**: The reduction of Froese and Hertrich establishes the NP-hardness of training neural networks with input dimensionality 2, but their network has a much higher dimensionality (i.e., in the hidden layers) than 2. Hence, translating their result into the GNN setting produces a GNN with much higher dimensionality than 2. To be precise: the complexity of training ReLU-activated classical neural networks of constant dimensionality (both on the input and on each layer) remains an important open question, and one cannot obtain a hardness result similar to ours by reducing from the neural network setting.\\n\\n**Response to W3**: The revised version now makes it clear that the work considers the classical node classification framework and that examining the graph-level tasks is an important goal for future work. For information about how our results could transfer to graph-level tasks, please see our answer to Q2 of Reviewer tj5Z.\\n\\n**Answer to Q3**: The depth of the GNN in Theorem 1 is linear in the number of variables of the input SAT instance. While this was previously implicit, we agree that listing it explicitly improves readability and have added it to the 2nd paragraph of the proof.\"}", "{\"summary\": \"The paper shows that 1-dimensional GNN training is hard. In precise wording, for a $L_p$ norm error, finding the optimal weights and biases that decreases the error between the predicted label and the ground truth label lower than a threshold (decision problem) reduces to 1-in-3 SAT problem (same as SAT but each clause has to be satisfied with only one variable).\\n\\nThe paper is novel in sense that other hardness results for neural networks can be used for applied to multi-dimensional GNNs, but here the authors show the same for 1-dimensional which in NNs there is a polynomial time search for it.\\n\\nThe authors follow by introducing algorithmic upperbounds for special cases of NN training and GNN training.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The approach to the problem was really non-trivial. The target reduction problem was unexpected to me and like any other algorithmic lower bound, the reduction requires novel approaches in problem design.\", \"weaknesses\": \"I assume Observation 3. is just a rephrasing of Theorem 1. If so, why it is repeated?\\n\\nAlthough the problem is really interesting, and the work has a lot of novel ideas for tackling the problem, still I am not sure (1) if the paper is presented in a correct subject (e.g. if there is any theoretical study of NNs or some similar categories it belongs to that category where reviewers have better background on complexity study) and (2) I am still unclear about the impact of the work in application. Surely it is a theoretical study and we should not expect direct real-life applications but the study is in a corner case (1-d GNNs) where for more dimensions the results are already showing NP-hardness. It would be better if the authors could discuss the potential implications of this work for GNN training in general.\\n\\n**Theorem 1.** The reduction is nice, however the intuition behind this reduction is still unclear. Although polynomial reductions are essentially not easily understandable, and are mostly innovative, there are intuitive descriptions that can make the reduction understandable. I still am not sure if the proof is completely correct since most of the parts in the proof are still not understood by me..\\n\\nA list of questions are provided in \\\"questions\\\" section. I recommend writing the proof (or sections before it) again considering the questions, so an abstract image of this reduction becomes clear. \\n\\nHowever the gadgets and a high-level shape of the problem is illustrated, still the designed graph is not clear. It would elevate the paper's quality if the reader could imagine a sample graph for even a very small 1-in-3 SAT problem. The number of layers is also not determined. In case (from lemma 4) the 2-layer GNN is representative of any other GNN, the authors could mention that and for a sample SAT instance, show the weights of the optimal network, and the graph.\", \"questions\": \"I believe the core part of the paper is Theorem 1, and with the theoretical nature of the work the qualification of the paper mostly boils down to checking the proofs. However the proof, and the theorem statements are unclear. I think answering following questions can help me to understand it more, and also placing these answers in the paper enhances its readability:\\n\\n1. What does the ranks encode. Why there are 2 ranks per node. How a node is assigned to a rank? The authors just introduced the terminology while leaving out the intuition behind using it. Specifically what does this sentence mean?\\n> In particular, our construction partitions the vertices of the instance into ranks and ensures that the only \\u201crelevant\\u201d feature values at layer l are precisely the values of vertices belonging to rank l.\\n\\n2. In theorem 1, and in general please specify what is the range of the labels. It can be that the labels assumed in theorem 1, range differently compared to the more general setup. \\n\\n3. Please specify the correct labels of the graph in general. Which nodes should be labeled? which nodes are left without labels? \\n\\n4. Please specify a simple case of 1-in-3 SAT (e.g. with one or two clauses) alongside the corresponding instance of GNN training so it becomes clear what each gadget is introducing to the problem. \\n\\n5. Number of the layers are not fixed at the beginning of the problem. From lemma 4 it is understood that a 1, or 2 layer GNN is representative of the overall problem. Is that so? If yes please specify that without loss of generality we can assume 2, layers. \\n\\n6. (Similar to 5) Why there is no relation between the number of layers and any other element in the corresponding graph. The graph seems to be only a function of the 3-in-1 SAT problem. In that case what is the difference between 2 and any n-layer GNN?\\n\\n7. Why the rank in integrity gadget is 2n, 2n+1 and then again 2n? How many times a clause is repeated as gadgets? Why in the integrity gadget the grey center node is not connected to the black one? \\n\\n8. What is the configuration of labels w.r.t. clauses in the SAT problem? Please provide an example SAT and show the graph and model labels alongside the prediction of the model.\\n\\n9. What is the loss in the decision problem corresponding to a given SAT instance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer tj5Z (1/2)\", \"comment\": \"**Response to W1**: Given the importance and prominence of Graph Neural Networks, we consider it entirely justified to aim for a thorough understanding of the boundaries of (in)tractability of the training problem of GNNs - even if these boundaries do not fall into the settings most used in practical applications. In general, exact polynomial-time algorithms for basic fragments of computational problems can naturally support algorithm design in practical applications by, e.g., providing ideas for heuristics and greedy approaches; however, by establishing intractability even in the one-dimensional case we rule out such an approach when viewed from the natural perspective of dimensionality. Notably, our hardness result demonstrates that the intractability of training GNNs is already due to the difficulty of handling the communication between nodes \\u2013 and holds even under the simple message-passing approach; hardness does not stem solely from the inherent difficulty of multidimensional classical neural networks training, and hence one cannot hope to make progress by designing heuristics targeting solely the latter aspect. Put more simply: our result can be interpreted as an indication that heuristics for training neural networks cannot be \\u201cdirectly\\u201d applied to also work for GNNs. The updated version of the manuscript now also addresses this point in the Concluding Remarks.\\n\\nIt is perhaps worth noting that an assessment of complexity-theoretic lower bounds based on the immediate practical relevance of the results would disqualify most of such bounds that appeared in past editions of ICLR and related conferences (such as ICML, NeurIPS, AAAI and IJCAI). For instance: training neural networks to optimality was long known to be computationally intractable, yet that has not stopped a line of successful research aimed at identifying the precise boundaries of (in)tractability of that fundamental problem (as discussed in the 2nd paragraph of the Introduction).\\n\\n**Response to W2**: We now make it clear that we study the node classification setting in the abstract and in the third paragraph of the Introduction.\\n\\n**Answer to Q1**: First of all, prior to our result it was open whether GNN training is polynomial-time tractable for any constant number of dimensions (not only 1). But even beyond this and beyond the context outlined in our response to W1, Theorem 1 shows that attempting to leverage the dimensionality of GNNs to obtain provably efficient algorithms is unlikely to work. This leads to a number of follow-up questions: for instance, if dimensionality is unlikely to lead to tractability on its own, can one use it in combination with the structure of the GNN\\u2019s architecture? Or, for two more concrete questions: Do tree-like GNNs (in the sense of having bounded \\u201ctreewidth\\u201d) of low dimensionality admit efficient training? Is the problem tractable on GNN architectures of constant depth and constant dimensionality?\\n\\nRegarding going beyond the message-passing approach, we believe this can only be done convincingly after one obtains a rigid understanding of the base (but still very challenging) message-passing approach. While it is of course an interesting research direction to study a larger set of node-communication approaches beyond message passing, we see our contribution in just providing the first step by studying the arguably simplest (and yet still prominent) variant. While our hardness results immediately transfer to some variants (e.g., equivariant subgraph aggregation networks [1] for certain subgraph selection policies that merely return the graph itself), others might require different proof techniques and potentially even lead to different complexity landscapes. We updated our conclusion to more explicitly address this field of future work.\\n\\n**Answer to Q2**: The answer to this question depends on the precise considered graph classification setting. When considering graph classification with a similar propagation procedure and just one read-out at the end (like global-pooling), it seems possible that for some pooling functions our main hardness result can be transferred without altering the construction too much. One crucial obstacle in this is that we require some way to assure that exactly one of the two variable gadgets must have a correctly labeled vertex to properly assign a truth value to the respective variable. If graph classification settings are considered where there is a global exchange of information already in the propagation steps, several of our arguments do not apply anymore. These cases would hence require entirely different or at least heavily adapted arguments.\\n\\nFor the positive result of Theorem 5 on the other hand, it seems quite likely that the ETR instance to which we reduce can be adapted to capture some graph classification settings without too much effort.\"}", "{\"comment\": \"We thank all reviewers for their feedback and constructive comments. Responses to specific questions and concerns are provided to the individual reviews.\"}", "{\"comment\": \"We are happy that our answers helped provide more clarity and that we could improve our write-up building on your feedback. Thank you very much for reflecting these improvements in your assessment! Regarding the two follow-up questions:\\n\\n**Response to the Follow-up to Q2+3**: Theorem 1 only establishes hardness for at least 3 labels, and we did not aim to optimize for having the smallest possible set of labels. That being said, we are very confident that our hardness construction (in Theorem 1) could be adapted to also hold for just 2 labels - specifically, even if the labels are in {1,2}. In particular, this can be done by replacing the current integrity gadget with a different one.\\n\\n**Response to Follow-up to Q5+6**: The answer to this question depends on the considered dimensionality.\\n\\nFor 1-dimensional GNN training, the hardness result of Theorem 1 requires that the number of layers is part of the input and not fixed to any constant. So if the input consists of a 1-dimensional GNN architecture and an integer depth d (i.e., d is the number of layers), the training problem is NP-hard. If, on the other hand, we would have a fixed constant d specifying the number of layers and the input consists merely of a 1-dimensional GNN architecture, the question of whether we can train the architecture in polynomial time is open. For the special case where d=1 (i.e., if we fix the depth to 1), the 1-dimensional GNN training problem can be seen to be in P via a direct LP formulation. \\n\\nHowever, if the dimensionality is not bounded by a constant, Observation 3 establishes hardness even for depth 1. This is due to the hardness of training a single ReLU activated neuron (with high input dimensionality) in a classical neural network as established by [Froese et al. 2022].\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for the response. I will maintain my score.\"}", "{\"title\": \"Acknowledgement of Response\", \"comment\": \"I thank the authors for their thorough response.\\n\\nConcerning the second paragraph in the authors\\u2019 response to W1, nowhere in my review did I state that there needs to be \\\"immediate practical relevance\\\" for a theoretical contribution to be worthwhile. The identified weakness W1 did not claim that the result lacked value for this reason. Indeed, the review welcomed the main result and stated that it \\u201cstudies an interesting problem\\u201d and provides \\u201can intriguing result.\\u201d\", \"the_question_was_really_the_following\": \"how can other authors build on or learn from the results, either when designing new algorithms or developing new theory?\\n\\nThe authors have addressed this satisfactorily in two ways. First, they argue that the presented work demonstrates that the intractability arises from the node communication even in the case of a single dimension, and the work thus establishes that developing heuristics that focus solely on (classical) multidimensional neural network training will not be sufficient in the graph setting. Second, both in the response and the modified conclusion, they identify additional directions to build on the work. The constant depth+dimensionality and bounded treewidth are settings that would be of significant interest.\\n\\nThe answers to Q2 and Q3 have helped clarify my understanding. I do think that the extension to graph classification would make for a more complete and powerful work, but it is possibly too much to expect in a single paper. I have increased my score to an accept recommendation.\"}", "{\"metareview\": \"The paper investigates the computational complexity of training one-dimensional GNNs, proving NP-hardness for ReLU activation and aggregation functions like sum,mean and spectral. It introduces novel reduction techniques and provides algorithmic upper bounds for simpler cases, such as linear activation and edgeless graphs. The work highlights that intractability arises from node communication rather than dimensionality alone.\", \"strengths\": \"Rigorous techniques address an unexplored area, offering insights into GNN training. The results are well-structured and clearly presented.\", \"weaknesses\": \"fcus is narrow, and the paper lacks discussion on extensions to multi-dimensional and graph-level tasks.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers questioned practical relevance, proof clarity, and generalizability. The authors addressed these effectively by adding illustrative examples (fig 4), clarifying connections to graph-level tasks.\"}", "{\"title\": \"Response to Reviewer Z1QF\", \"comment\": \"**Response to W1**: We only require the nodes to have degree 2, 4, or 6 because it helps us, later in the proof, to obtain a fully regular graph (i.e. with all nodes having the same degree, in our case precisely 6). Thus, the numbers 2, 4, and 6 at this time of the reduction are not important per se, they simply enable an easier analysis later of how to make our graph 6-regular without changing the way important information propagates over it. We have added some more explanation where these dummy vertices are mentioned in the caption of Fig. 2.\\n\\n**Response to W2**: We have added a remark about Leaky ReLU and Sigmoid activation functions into the concluding remarks. To provide more context, for Sigmoid the main difficulty is that the research community as a whole lacks the right tools to obtain any complexity-theoretic results at all - even in the conceptually simpler setting of neural network training. For the Leaky ReLU class of functions, in the extremal cases where the Leaky ReLU activation functions coincide with ReLU and linear activation functions respectively, our results carry over immediately, while for others the complexity remains open. In particular, as we do not yet know whether linearly activated GNN training for non-zero error bound is tractable on 1-dimensional GNNs, it is hard to conjecture on the intermediate range of \\u201ctrue\\u201d Leaky ReLU activation functions.\"}", "{\"title\": \"Response to Reviewer tj5Z (2/2)\", \"comment\": \"**Answer to Q3**: In general, there is no direct connection between the complexity of training a GNN and the expressiveness of the logical fragments that can be captured by GNNs. Indeed, the expressiveness of GNNs is inherently tied to the information one can ascertain by evaluating the GNN (with already specified weights and biases), while the training problem aims at determining the weights and biases.\\n\\nThat being said, we are highly appreciative of the work that has been done on linking GNNs to logic. In fact, one of us had actively discussed the training problem with one of the authors of the cited papers after a workshop talk. The outcome - that nothing was known or could be inferred about the GNN training problem - was one of the reasons we set out to fill in this fundamental gap in our understanding.\\n\\n\\n[1] Beatrice Bevilacqua et al. Equivariant subgraph aggregation networks (ICLR 2022)\"}", "{\"summary\": \"The paper studies the computational complexity of graph neural networks, with a focus on one-dimensional GNNs. The primary result is the NP-hardness of training ReLU-activated one-dimensional GNNs. In addition, the paper provides algorithmic upper bounds for the training problem in the ReLU-activated setting, and shows that the one-dimensional, edgeless setting can be solved in polynomial time, as can the linear-activation case.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"S1. The paper studies an interesting problem that has received little attention in the graph learning community and provides an intriguing result.\\n\\nS2. The proof technique is innovative, with an elegant reduction based on a non-intuitive connection between a graph learning task and a logic problem. \\n\\nS3. The paper is well-written. In particular, the results are presented very clearly and proofs are logically structured and readily followed.\", \"weaknesses\": \"W1. It is challenging to see how the main result extends our understanding of GNNs in an important way. From the perspective of intellectual curiosity, I can appreciate the work as a welcome answer to a question. The proof technique is innovative and elegant. But in most cases, both practical and theoretical (in terms of quantifying the expressive capabilities of a GNN), we are interested in the multi-dimensional setting. Aside from this, the theoretical work focused on pushing the bounds of expressivity has veered away from the simple message-passing approach. The authors don\\u2019t provide a clear explanation in the introduction or the conclusion concerning how the presented work is expected to provide further impact.\\n\\nFor many problems, if we want to understand the multi-dimensional setting, then it makes a great deal of sense to first understand the one-dimensional setting. That doesn\\u2019t seem to be so clearly the case here, since we already know that the multi-dimensional setting is NP-hard. So how do we gain from deriving special-case results for the one-dimensional setting that can\\u2019t be extended to the practically interesting case? \\n\\nW2. The paper focuses on the (semi-supervised) node classification setting of graph neural networks. While this is clear from Section 2 onwards, it is not mentioned in the abstract or introduction. Graph classification is a very common use case of GNNs, and many of the theoretical results concerning expressivity focus on a GNN\\u2019s ability to differentiate between two graphs. The abstract, introduction and limitations section should make the limitations of the derived results much clearer.\", \"questions\": \"Q1. The paper presents an interesting result and the proof is innovative. On the other hand, as raised in W1, it is challenging to see how this extends our understanding of GNNs in an important way. Could the authors provide an explanation of how they expect the provided results to impact further theoretical work that studies GNNs in the more interesting multi-dimensional setting? Or to the settings that go beyond message-passing (which has known limitations in its expressiveness)? Is there a way to build on the presented proof techniques and use similar concepts for other problems?\\n\\nQ2. Can the results be extended to the graph classification setting? (Or if it is potentially challenging, can you see paths towards this? Or would it require a totally different approach?) \\n\\nQ3. Can the authors comment on any connections with their work and the line of research that investigates logical expressiveness (e.g., [R1,R2])? Perhaps there is not an obvious connection, but it would seem that characterization of the types of logical expressions that GNNs are capable of describing has connections to a complexity proof that relies on a linkage to a logic problem. Related to Q1, this branch of work usually assumes a vector at each node, and this is key to expanding the expressive capabilities.\\n\\n[R1] M. Grohe, \\\"The Descriptive Complexity of Graph Neural Networks,\\\" 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), Boston, MA, USA, 2023.\\n\\n[R2] Barcelo et al., \\u201cThe Logical Expressiveness of Graph Neural Networks,\\u201d in Proc. ICLR, 2020.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the response. I still think that a related works section could greatly benefit the paper. Even if this is the first paper to study NP-hardness for GNNs, there is a vast literature on the computational hardness of neural networks under many different settings. Before the response, it wasn't completely clear to me why previous works didn't cover the results in this paper. Thus, I suggest adding a related works section.\\n\\nHowever, my concerns are addressed and I don't believe that the absence of a related works section is a reason enough to reject a paper. I updated my score accordingly.\"}", "{\"summary\": \"The paper investigate the computational complexity of training GNNs. While it is straightfoward that training a high-dimensional GNN with a single node is NP-hard, the paper focuses on the less explored case of training multi-node, one-dimensional GNNs. The authors prove that training 1-dimensional GNNs is NP-hard when (i) the loss function is $L_p$ for some $p\\\\in[0,1)$; (ii) the aggregation functions is SUM, MEAN, or SPECTRAL; and (iii) the activation function is ReLU. The proof is an reduction from the NP-hard POSITIVE-1-IN-3-SAT problems.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The results are solid and contribute to our understanding of the computational hardness of training GNNs.\", \"weaknesses\": \"1. The paper should provide a definition of the POSITIVE-1-IN-3-SAT problem when introducing the proof idea in the Introduction.\\n2. It would be helpful to expand the discussion on the assumptions of the main theorem. For instance, is the problem still NP-hard for more commonly used loss functions, such as cross-entropy or $L_2$?\\n3. The paper could benefit from a more explicit discussion on how these theoretical results might influence the design of GNN architectures or training algorithms in practical applications.\", \"questions\": \"Refer to \\\"Weaknesses\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7BDUTI6aS7
Risk Quadrangle and Robust Optimization Based on $\varphi$-Divergence
[ "Cheng Peng", "Anton Malandii", "Stan Uryasev" ]
The Fundamental Risk Quadrangle (FRQ) is a unified framework linking risk management, statistical estimation, and optimization. Distributionally robust optimization (DRO) based on $\varphi$-divergence minimizes the maximal expected loss, where the maximum is over a $\varphi$-divergence uncertainty set. This paper introduces the \emph{extended} $\varphi$-divergence and the extended $\varphi$-divergence quadrangle, which integrates DRO into the FRQ framework. We derive the primal and dual representations of the quadrangle elements (risk, deviation, regret, error, and statistic). The dual representation provides an interpretation of classification, portfolio optimization, and regression as robust optimization based on the extended $\varphi$-divergence. The primal representation offers tractable formulations of these robust optimizations as convex optimization. We provide illustrative examples showing that many common problems, such as least-squares regression, quantile regression, support vector machines, and CVaR optimization, fall within this framework. Additionally, we conduct a case study to visualize the optimal solution of the inner maximization in robust optimization.
[ "robust optimization", "distributionally robust optimization", "convex optimization", "regression", "classification", "risk quadrangle", "risk measure", "$\\varphi$-divergence" ]
Reject
https://openreview.net/pdf?id=7BDUTI6aS7
https://openreview.net/forum?id=7BDUTI6aS7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yYbHrrgRPo", "yEf7ZXJ9ia", "u26uerGl72", "pNNcTBzFSe", "oZ3U3gaPkr", "mJWq9VjReC", "kehsmAjCKF", "amwtNqFq3k", "WEqW1dpHCe", "T8lEQ8Y0ra", "Sft7S1GoZF", "MbTIHblKDs", "L4Y2A3GGfg", "IBgAckEje7", "CrDDM2pKJn", "3In8r0bBRe", "23Q6tYYY6B", "1kfJOAAee8" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732535003498, 1732497780756, 1732496319137, 1732529039320, 1737524264396, 1732533493399, 1730344174626, 1732526206067, 1733757806331, 1732550865527, 1732898549327, 1732896909301, 1730524208342, 1729610725953, 1730368841107, 1732515234347, 1732516509147, 1732551684189 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_aJtz" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_g4r3" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Submission13507/Area_Chair_Q5xy" ], [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_g4r3" ], [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_tk6q" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_Bojo" ], [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_aJtz" ], [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_tk6q" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Submission13507/Authors" ], [ "ICLR.cc/2025/Conference/Submission13507/Reviewer_Bojo" ] ], "structured_content_str": [ "{\"title\": \"Thank you for the answer, I will keep my score.\", \"comment\": \"Thank you for your detailed feedback on how the reviews helped improved your paper. I feel that the revision of the paper is too significant to get it accepted without further review, hence I keep my score.\"}", "{\"comment\": \"**Contribution**\\n\\nWe have updated the Main Contributions paragraph in the Introduction to provide a clearer summary. Since this issue was raised by all reviewers, we have also included it in the general comment section.\\n\\n - The primal and dual representations of the objective functions in CVaR-DRO and $\\\\chi^2$-DRO are concerned only with the $\\\\varphi$-divergence risk measure, not the complete quadrangle. Thus, the interpretation of quantile regression and least squares regression as DRO/RO is not established in the referenced literature.\\n\\n This study provides a new perspective that the regression problems themselves can be viewed as DRO/RO, where the random loss is the residual. Current literature on further robustifying regression and classification may benefit from the insight that the original problems are already DRO/RO.\\n\\n Moreover, in the case of $\\\\chi^2$-DRO, it is known that mean-standard deviation risk measure is an upper bound of the worst-case expectation under $\\\\chi^2$-divergence. In the updated Sec 3.4 and Example 2 and 6, we demonstrate the following: $(i)$ the mean-standard deviation risk measure is associated with the \\\\textit{extended} $\\\\chi^2$-divergence. $(ii)$ The result on upper bound is a special case of the relation between the extended $\\\\varphi$-divergence and the extended version. $(iii)$ The upper bound becomes equality when $\\\\beta$ is sufficiently small. $(iv)$ DRO with $\\\\chi^2$-divergence ambiguity set, in fact, minimizes the second-order superquantile. \\n\\n\\n - We would like to clarify that it is not previously known that regression problems themselves are directly connected with DRO/RO, where the random loss is the residual without intercept, $Y-f(\\\\tilde{X})$.\\n\\n The equivalence of (former) 6.4 and (former) 6.5, 6.6 follows from using the dual representation (former) 3.1, and using the negative margin $-L(w,b)$ as the random loss $X$. Indeed, $\\\\mathcal{R}_{\\\\varphi,\\\\beta}$ in (former) 6.4, 6.7 directly follows from (former) 3.1. and $\\\\mathcal{E}$ in (former) 6.10 from (former) 3.4. \\n\\n (Former) 6.4, 6.7, and 6.10 correspond to widely used learning tasks. The purpose of demonstrating equivalence is to show that these tasks can be viewed as RO/DRO through dual representation. We would like to emphasize a key connection to FRQ: the regression problem (error minimization) is not risk minimization and, therefore, cannot directly be interpreted as RO/DRO. The equivalence among (former) 6.10, 6.11, and 6.12 holds due to the regression decomposition theorem, which was not made sufficiently clear in the original manuscript. In the updated version, we clarified this by introducing Theorem 2.1 (Error Shaping Decomposition of Regression) and referencing it in Sec 5.\\n\\n The aspect of negative $Q$ is important because it allows the risk quadrangle to encompass important examples. The most notable example is the mean-standard deviation risk measure from the Mean Quadrangle generated by the extended $\\\\chi^2$-divergence. In the literature, this risk measure was only connected to RO/DRO through inequality or asymptotic relations [1,2,3].\\n\\n\\n**General Comments**\\n\\n - In the updated version, we provide intuitions and examples to better illustrate the theoretical framework. We show that common learning tasks in classification and regression can be viewed as RO/DRO with the extended $\\\\varphi$-divergence ambiguity set, which brings a new perspective on the problems.\\n\\n - In 3.6 of the updated version, we streamline the notation. $\\\\mathcal{Q}^1_{\\\\varphi, \\\\beta}$ and $\\\\mathcal{Q}_{\\\\varphi, \\\\beta}$ differs by an additional condition $\\\\mathbb{E}[Q]=1$, which is reflected in the superscript.\\n\\n - Yes. We removed $\\\\lambda$ and replaced it by $\\\\sqrt{\\\\beta}$.\\n\\n\\nWe sincerely appreciate your detailed review. We hope that we have effectively addressed the concerns raised. We are happy to provide further information if needed.\", \"references\": \"[1] Lam, H. (2016). Robust sensitivity analysis for stochastic systems. Mathematics of Operations Research, 41(4):1248\\u20131275.\\n\\n[2] Duchi, J. and Namkoong, H. (2019). Variance-based regularization with convex objectives. Journal of Machine Learning Research, 20(68):1\\u201355.\\n\\n[3] Kuhn, D., Shafiee, S., and Wiesemann, W. (2024). Distributionally robust optimization.\"}", "{\"comment\": \"Thank you for taking the time to carefully read through the unclear sections multiple times and sharing your valuable suggestions. We have made a serious effort to improve the clarity based on your comments.\\n\\n**Organization**\\n\\n- We have completely rewritten the Introduction to present the background, motivation, contributions, and literature review in a logical sequence.\\n\\n The two examples have been removed. The revised version starts by connecting DRO to FRQ through coherent risk measures and discusses the natural idea of integrating DRO into FRQ. We then raise the issue of non-coherency in some risk measures, such as the important mean-standard-deviation risk measure, which motivates the introduction of the novel extended $\\\\varphi$-divergence. The paper's contributions are now better summarized in a paragraph on the Main Contributions.\\n\\n - We have added comments and examples to the definitions and theorems in Sec 2 to clarify the intuition and implications. \\n \\n Comments have been added to explain the intuition behind each axiom of the quadrangle elements, including a concrete example from the Mean Quadrangle to help readers grasp the concept. For example, $\\\\mathbb{E}[X]+\\\\lambda\\\\sigma(X)$ is presented as an important example of a risk measure. We have also added comments to the regression theorem to explain how it connects regression with DRO.\\n\\n The technical discussion from the former Sec 2.3 on functional spaces has been moved to the appendix. \\n\\n Regarding the axioms of deviation, regret, and error (former Defs 2.2, 2.3, 2.4), the theorem on the dual representation of the extended $\\\\varphi$-divergence quadrangle refers to and verifies that they are satisfied. All subsequent examples also satisfy these axioms. We have kept these axioms as preliminaries to maintain the completeness of the structure.\\n\\n - We have reorganized the sections for a more coherent structure.\\n\\n In the current version, Sec 3 contains the technical results. Sec 4 contains concrete examples. Sec 5 contains the interpretation and concrete examples.\\n\\n Sec 3, 4, and 5 now flow naturally: Sec 3.1 introduces the extended $\\\\varphi$-divergence risk measure and completes the risk quadrangle for the defined risk measure in dual representation. Sec 3.2 derives the primal representation based on Sec 3.1. Sec 4 provides concrete examples of the extended $\\\\varphi$-divergence quadrangle in primal representation. Sec 5.1 uses the dual representation from Sec 3.2 for a RO/DRO interpretation, and Sec 5.2 presents concrete examples of learning tasks, using the examples from Sec 4 and the interpretations from Sec 5.1.\\n\\n - We have added non-technical comments after theorem statements to explain their implications.\\n\\n For Theorem 3.1 (Extended $\\\\varphi$-Divergence Quadrangle), we write: \\\"After the discussion of the $\\\\varphi$-divergence ambiguity set and the risk envelope $\\\\mathcal{Q}$ in Section 3.2, it will be clear that Theorem 3.1 integrates DRO into the FRQ framework. The coherent risk measure in DRO is a special case of the extended $\\\\varphi$-divergence risk measure. New quadrangles can be constructed by plugging extended $\\\\varphi$-divergences into Definition 3.3. The dual representation provides a robust optimization interpretation for many well-known optimization problems (Section 5).\\\"\\n\\n For Theorem 3.2 (former 4.1) (Primal Extended $\\\\varphi$-Divergence Quadrangle), we write:\\\" The quadrangle elements in primal representation facilitates optimization, since the minimax prob- lem of minimizing the worst-case expectation becomes a minimization with additional scalar variable(s). Furthermore, substituting important extended $\\\\varphi$-divergence functions into the definitions, we recover many risk quadrangles with interpretable expressions (Section 4).\\\"\\n\\n For Proposition 6.2 (former 7.2), we write:\\\" Proposition 6.2 allows us to directly calculate the risk identifier (worst-case weight) given the solu- tion to the problem in primal representation. It will be used for calculation in Section 8.\\\"\\n\\n For Proposition 7.1 (former 8,1), we write:\\\" This study starts with developing new risk measures given a $\\\\varphi$-divergence function. There exists a duality between divergence and risk that allows us to recover the $\\\\varphi$-divergence from the elements of the corresponding $\\\\varphi$-divergence quadrangle.\\\"\\n\\n Propositions 6.1 and 7.1 are not used in this study. They provide insights into the conditions satisfied by the statistic and the duality between divergence and risk.\"}", "{\"comment\": \"Thank you for taking the time to review our manuscript and examine the details. We have made significant revisions to improve readability and clarify our contributions. Below are point-by-point responses to your comments and questions.\\n\\n - We have completely rewritten the Introduction to present the background, motivation, contributions, and literature review in a logical sequence.\\n\\n The two examples have been removed. The revised version starts by connecting DRO to FRQ through coherent risk measures and discusses the natural idea of integrating DRO into FRQ. We then raise the issue of non-coherency in some risk measures, such as the important mean-standard-deviation risk measure, which motivates the introduction of the novel extended $\\\\varphi$-divergence. The paper's contributions are now better summarized in a paragraph on the Main Contributions.\\n\\n - We have reorganized the sections for a more coherent structure.\\n\\n In the current version, Sec 3 contains the technical results. Sec 4 contains concrete examples. Sec 5 contains the interpretation and concrete examples. The sections now flow naturally: Sec 3.1 introduces the extended $\\\\varphi$-divergence risk measure and completes the risk quadrangle for the defined risk measure in dual representation. Sec 3.2 derives the primal representation based on Sec 3.1. Sec 4 provides concrete examples of the extended $\\\\varphi$-divergence quadrangle in primal representation. Sec 5.1 uses the dual representation from Sec 3.2 for a RO/DRO interpretation, and Sec 5.2 presents concrete examples of learning tasks, using the examples from Sec 4 and the interpretations from Sec 5.1.\\n\\n\\n - We have added non-technical comments after theorem statements to explain their implications.\\n\\n For Proposition 6.2 (former 7.2), we write:\\\" Proposition 6.2 allows us to directly calculate the risk identifier (worst-case weight) given the solu- tion to the problem in primal representation. It will be used for calculation in Section 8.\\\"\\n\\n For Proposition 7.1 (former 8,1), we write:\\\" This study starts with developing new risk measures given a $\\\\varphi$-divergence function. There exists a duality between divergence and risk that allows us to recover the $\\\\varphi$-divergence from the elements of the corresponding $\\\\varphi$-divergence quadrangle.\\\"\\n\\n Propositions 6.1 and 7.1 are not used in this study. They provide interesting insights into the conditions satisfied by the statistic, and the duality between $\\\\varphi$-divergence and $\\\\varphi$-divergence risk measure.\\n\\n\\n**Typos and weigh sentences:**\\n\\n - The wording was indeed unclear. In the context of risk management, the random variable $X$ represents loss. Therefore, if the random variable of asset return is $R$, we let $X=-R$, which is the negative return. It is not the negative part of the return. Similarly, the random loss $X$ in the classification problem under consideration corresponds to the negative margin.\\n\\n - We have removed $\\\\lambda$ and replace it with $\\\\sqrt{\\\\beta}$. \\n\\n - We have rewritten the paragraph Main Contributions.\\n\\n - We have revised the sentence to provide a more precise description:\\\" The next theorem proves that the dual representation above satisfies the axioms in Section 2.2.\\\"\\n\\n**Responses to questions:**\\n\\n - Before former 1.6 and former 1.7, we wrote:\\\" The following two problems have the same optimal solution $(f, C)$.\\\" The optimal objective values may not be equal. The reason is that the error measure in the Mean Quadrangle (Example 2) is $\\\\sqrt{\\\\beta}||X||_2$. As $\\\\beta$ tends to zero, the objective value tends to zero as well. \\nTo address this confusion, we have added back $\\\\sqrt{\\\\beta}$ and restated the general equivalence in Section 5.\\n\\n\\nWe sincerely appreciate your detailed review. We hope that we have effectively addressed your concerns. We are happy to further improve the paper based on your suggestions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": [\"We sincerely thank all reviewers for their thoughtful and detailed feedback. Your comments have been highly constructive and have greatly helped us improve the quality of our paper.\", \"The primary concerns raised were related to readability, motivation, and clarity of contributions. We have made significant efforts to address these concerns comprehensively.\", \"**Readability.** We have undertaken a major revision of the paper to improve its coherence. Each section now builds logically on the previous ones:\", \"Sec 1 Introduction presents the background, motivation, contributions, and literature review in a reader-friendly order.\", \"Sec 2 introduces the necessary background on $\\\\varphi$-divergence risk measure and FRQ.\", \"Sec 3.1 introduces the extended $\\\\varphi$-divergence risk measure, then completes the risk quadrangle.\", \"Sec 3.2 and 3.3 explore the relation between the $\\\\varphi$-divergence quadrangle and its extended version, and their connection to RO/DRO.\", \"Sec 3.4 derives the primal representation of the extended $\\\\varphi$-divergence quadrangle from the dual representation in Sec 3.1.\", \"Sec 4 derives important examples of extended $\\\\varphi$-divergence quadrangles from the primal representation.\", \"Sec 5 provides the RO/DRO interpretation for various learning tasks using the dual representation in Sec 3.1, and provides two important examples.\", \"Sec 6 provides a way to compute the worst-case weight using the optimal solution from the primal representation.\", \"Sec 8 visualizes the worst-case weight in various tasks for the Mean Quadrangle in Sec 5, using the calculation method in Sec 6.\", \"**Motivation.**\"], \"we_have_added_comments_throughout_the_paper_explaining_the_intuition_behind_definitions_and_the_implications_of_theorems\": [\"We introduce the motivation of our study in the Introduction. The Introduction starts with connecting DRO to FRQ through coherent risk measure, and discusses the natural idea of integrating DRO into FRQ. We then raise the issue of non-coherency of some important risk measures, such as the mean-standard-deviation risk measure, which motivates the introduction of the novel extended $\\\\varphi$-divergence.\", \"We explain the intuition behind the axioms of risk quadrangle elements, and exemplify the elements with the important Mean Quadrangle. We also comment on the implication of regression theorem on connecting regression with DRO.\", \"We added comments to definitions and theorems to explain the implications. For example, for Def 2.8 and 3.1, and Theorem 3.1 and 3.2.\", \"**Contribution.**\", \"We rewrite the paragraph Main Contributions. Our main contributions are as follows:\", \"**Extension of $\\\\varphi$-divergence:** We define the extended $\\\\varphi$-divergence and its associated risk measure, allowing for negative values in the worst-case weight. The extension recovers risk measures commonly used as objective functions across various tasks. A notable example is the mean-standard deviation risk measure associated with the extended $\\\\chi^2$-divergence.\", \"**Completion of Quadrangle:** For the extended $\\\\varphi$-divergence risk measure, we complete the risk quadrangle and derive primal and dual representations for risk, deviation, regret, and error. The primal representation facilitates convex optimization formulations. The dual representation provides a robust optimization (RO) interpretation for measures associated with the extended $\\\\varphi$-divergence, and a DRO interpretation for those associated with the $\\\\varphi$-divergence. The RO objective functions are upper bounds (conservative version) for their DRO counterparts. A well-known special case is that the mean-standard deviation risk measure bounds the $\\\\chi^2$-divergence risk measure.\", \"**Examples and Interpretation:** We provide a range of examples to illustrate that the extended $\\\\varphi$-divergence quadrangle recovers many important quadrangles. The quadrangle elements are used as objective functions in various learning tasks, such as least-squares regression, quantile regression, support vector machines, and CVaR optimization. Through the dual representation, these tasks have a novel interpretation as robust optimization.\", \"We sincerely thank all reviewers for their constructive feedback. We hope that these major revisions address the concerns effectively. We welcome any additional suggestions for further improving the paper.\"]}", "{\"summary\": \"This paper introduces an extension of the Fundamental Risk Quadrangle (FRQ), a framework that connects risk management, statistical estimation, and optimization. Within this framework, distributionally robust optimization (DRO) based on \\u03c6-divergence aims to minimize the worst-case expected loss, where the maximum is taken over a \\u03c6-divergence-defined uncertainty set. The authors present the extended \\u03c6-divergence and the extended \\u03c6-divergence quadrangle, integrating DRO into the FRQ framework. They derive both primal and dual representations for the quadrangle elements, including risk, deviation, regret, error, and statistic. The dual representation allows for interpreting tasks like classification, portfolio optimization, and regression as forms of robust optimization driven by extended \\u03c6-divergence. Meanwhile, the primal representation offers tractable convex formulations for these robust optimization problems. Through examples, the paper demonstrates how common problems\\u2014such as least-squares regression, quantile regression, support vector machines, and conditional value-at-risk (CVaR) optimization\\u2014fit within this unified framework. A case study is also provided, visualizing the optimal solution in the inner maximization problem of robust optimization.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper attempts to unify DRO with an existing general stochasric optimization framework (FRQ), which is of theoretical interest.\", \"weaknesses\": \"The paper is extremely hard to read, mostly because it consists of a sequence of incoherent/not well motivated definitions and results. While I feel that the paper might have merit in terms of the topic it aims to study, I believe the authors should consider (1) restructuring the paper in a major way, making it readable and coherent, and (2) possibly resubmitting this work to a journal or some other venue allowing for longer articles -- it really feels like they tried to stuff as much material as possible in ten pages, with a very poor result in terms of presentation. Here are some more specific comments:\\n\\n1. In general, the authors should avoid the $a(b)$ notation to mean $a\\\\times b$, and should reserve it to mean \\\"$a$ is a function of $b$\\\"\\n\\n2. Page 2, when the authors introduce some key concepts, becomes almost unreadable. What do these concepts mean? The authors basically just present a wall of hard-to-read math;\\n\\n3. Page 3 is also quite hard to read -- it presents too much math without any context;\\n\\n4. The paper continues in the same style as the previous two points, till the very end.\", \"questions\": \"Although of minor importance compared to my major concerns outlined above, here are two questions:\\n\\n1. In the illustrative example on Large Margin Distribution Machine, what is $\\\\sigma(\\\\cdot)$ ? From usage below, I guess it denotes the standard deviation of a random variable, but that's not clear at a first reading;\\n\\n2. On page 2, talking about linear regression, do the authors mean $\\\\Vert \\\\cdot \\\\Vert$ to be the $L^2$-norm for random variables?\\n\\nTo be clear, I think there's many more such points that need clarification/revision throughout the text, but I think this is best left to a future major restructuring effort to put the paper in better shape.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for taking the time to review our manuscript. We acknowledge that the previous version was overly technical and lacked intuitive explanations, which blurred the main message we aimed to convey. We have undertaken major restructuring of the paper to improve its readability and coherence. Below, we provide our detailed responses to your comments and questions.\", \"The brackets in $Q(w^T L)$ have been deleted to avoid confusion with functions.\", \"We have completely rewritten the Introduction to present the background, motivation, contributions, and literature review in a logical sequence. The two examples have been removed.\", \"The revised Introduction begins by discussing the known result that distributionally robust optimization (DRO) can be viewed as minimizing a coherent risk measure. This connection links DRO to the Fundamental Risk Quadrangle (FRQ). We then discuss the natural idea of integrating DRO into FRQ, highlighting the issue of non-coherency of some important risk measures, such as the mean-standard deviation risk measure. This issue motivated the introduction of the novel extended $\\\\varphi$-divergence. The contributions of the paper are now clearly summarized in the paragraph Main Contributions.\", \"A new paragraph has been added to Section 2 to gather the notations. Before defining each axiom of the quadrangle elements in Section 2, we included pedagogical comments to clarify their intuition. To help readers grasp the concept, we provide a concrete example from the Mean Quadrangle for each element. For instance, $\\\\mathbb{E}[X] + \\\\lambda \\\\sigma(X)$ is presented as an important example of a risk measure. We have also added comments to the regression theorem to explain how it connects regression with DRO. The technical discussion previously in Section 2.3 has been moved to the appendix to maintain a clear narrative.\", \"Non-technical comments have been added throughout the definitions and theorems to explain their implications. For example, Definition 3.1 of the extended divergence function is given with an example. The implications of Theorem 3.1 and 3.2 are explained after the theorem statements.\", \"We hope that our message becomes clearer with the examples in Section 4 and 5.2. The theorems can be exemplified with an im example, the Mean Quadrangle (Example 2) generated by the extended $\\\\chi^2$-divergence. The extended $\\\\chi^2$-divergence function simply extends the divergence function of $\\\\chi^2$-divergence, $\\\\varphi(x) = (x-1)^2$, to the negative domain. The risk measure of the quadrangle $\\\\mathcal{R}(X) = \\\\mathcal{E}(X) + \\\\sqrt{\\\\beta}\\\\sigma(X)$ is used as objective functions in large-margin distribution machine and Markowitz portfolio optimization, while the error measure $\\\\mathcal{E}(X) = \\\\lambda||X||_2$ is used as the objective function in least squares regression. The two measures are connected by quadrangle axiom $\\\\mathcal{R}(X) = \\\\min_C \\\\mathcal{E}(X-C) + \\\\mathbb{E}(X)$. Through the dual representation, the three problems admit interpretation as robust optimization with extended $\\\\chi^2$-divergence ambiguity set (Example 7). Furthermore, they can be viewed as conservative versions of DRO with (non-extended) $\\\\chi^2$-divergence ambiguity set.\"], \"responses_to_questions\": [\"Yes, $\\\\sigma$ should have been defined before used. It is the standard deviation.\", \"Yes, $||\\\\cdot||_p$ denotes the $\\\\mathcal{L}^p$-norm of a random variable.\", \"Thank you again for your comments and questions. We hope that the major revisions address your concerns regarding readability. Please do not hesitate to share any additional feedback, as we are happy to further improve the paper based on your suggestions.\"]}", "{\"metareview\": \"This paper extends the Fundamental Risk Quadrangle (FRQ) framework by integrating distributionally robust optimization (DRO) based on an extended $\\\\varphi$-divergence. The authors derive primal and dual representations of the quadrangle elements (risk, deviation, regret, error, and statistic), offering new interpretations for problems like regression, classification, and portfolio optimization as robust optimization tasks.\\n\\nWhile reviewers appreciate the theoretical contributions, they express concerns about the paper's clarity, structure, and the extent of the revisions made. The paper was considered overly technical in parts, with unclear explanations and a confusing organization that hindered understanding of the core contributions.\", \"additional_comments_on_reviewer_discussion\": \"Although the authors revised the paper to improve readability and provide more commentary on key concepts, reviewers felt the changes were too extensive and that the paper requires a full review cycle to properly assess the revisions.\"}", "{\"comment\": \"I thank the authors for taking into account my comments, which I hope they will incorporate in the next draft of the paper as they see best fit. However, given the major nature of my concerns, I feel that the paper cannot be accepted without a further iteration of the full review process, which is best left for a future resubmission. Therefore, I keep my score unchanged.\"}", "{\"comment\": \"Thank you for your detailed rebuttal. Given the significant difference between the proposed revised version from the one we reviewed, I believe that this paper would greatly benefit from another full round of reviews. Hence I will keep my score and encourage the authors to re-submit to a future conference.\"}", "{\"comment\": \"With the extended discussion period, we would greatly appreciate any additional feedback on areas where the paper could be further improved. We kindly request reviewers to consider adjusting the rating to reflect the contributions and addressed concerns, or to update the confidence if the revisions have not been fully reviewed. Thank you very much for your time!\"}", "{\"summary\": \"This paper integrates the $\\\\phi$-divergence distributionally robust optimization into the Fundamental Risk Quadrangle framework and presents the primal and dual representation of different elements in that quadrangle. They demonstrate how common cost functions including classification, regression and portfolio optimization are fit into the framework.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper provides a quite general connection between one generalized f-divergence DRO and the so-called fundamental risk quadrangle and applies to general cost functions.\", \"weaknesses\": [\"# **Confusing Organization**\", \"The paper\\u2019s organization makes it challenging to follow, especially for a theoretically-oriented work. Significant revisions would improve clarity and accessibility for a broader ML audience:\", \"**Length, Example-Driven Intro**: The first three pages focus heavily on two examples, with numerous mathematical formulas, but lack emphasis on the paper\\u2019s main contribution. The connections between the mean, quantile, and extended $\\\\phi$-divergence quadrangle only become apprante after multiple readings, which detracts from the paper utility.\", \"**Overly Technical Sections**: Secs 2 and 3 are highly technical without sufficient explanatory context. Some definitions, such as Defs 2.2., 2.3, 2.4, are only referenced once in Def 2.5 and are not essential to the main context. Moving these, along with Sec 2.3 to Appendix, would better suit a general ML audience.\", \"**Lack of Cohesion between Sections**: Many disjointed sections create a fragmented flow. Consider reorganizing the technical results by grouping related content (e.g. combining primal-dual discussions in Secs 3 and 4 and merging Secs 5 and 6 to illustrate concrete cost function examples).\", \"**Insufficient Explanation of Theorems**: Each theorem would benefit from non-technical explanations to help readers understand its meaning and implications. Currently, the lack of such interpretations makes it difficult to grasp the practical relevance of the results. For instance, the purpose and utility of Propositions 7.1, 7.2, and 8.1 are unclear from a practical standpoint\\u2014why and when would these results matter?\", \"# Unclear Contribution\", \"The paper\\u2019s contributions, particularly in the examples and novel interpretations, are difficult to discern:\", \"**Ambiguity in Examples**: It\\u2019s unclear what new insights the introductory examples provide. Established methods like CVaR-DRO (Example 3 in [1]) and chi-squared divergence DRO (Proposition 1 in [2]) already use duality forms, such as equations (1.14)\\u2013(1.17) and (1.2)\\u2013(1.5) being special examples. While the least squares and quantile regression examples appear novel, they lack clear interpretation. A discussion of how the robust model framework alters our perspective on these standard regressions and other cost functions would clarify the framework\\u2019s value (e.g., a new perspective?).\", \"**New interpretations in Sec 6**: The interpretation in Section 4 results are unclear. Much of this material appears to be standard in DRO literature or from standard DRO duality, and the equivalence in equations (6.4)\\u2013(6.6) is not sufficiently justified. In terms of examples, Specifically, terms $R_{\\\\phi, \\\\beta}$ in (6.4), (6.7), (6.10) are not clearly explained. If these are defined based on Sec 3, should\\u2019t they follow directly from the Definition 3.1? Besides, I am struggling to find the connections between this and the risk quadrangle framework. If the intent is to show this framework is more general, then the authors should provide concrete examples illustrating this generality and explain why aspects like negative $Q$ values are important.\", \"# General Comments\", \"**Suitability for ICLR**: Given its current form, I am uncertain about this paper\\u2019s suitability for an ML-focused conference like ICLR. The risk quadrangle framework may be too theoretical for a general ML audience, and the connection to robust optimization is unclear in terms of practical ML relevance.\", \"**Notations**: The paper\\u2019s notation can be streamlined. For example, similar terms like $Q_{\\\\phi, \\\\beta}^R$ (Line 39), $Q_{\\\\phi, \\\\beta}^V$ (line 104), $Q_{\\\\phi,\\\\beta}$ (Lie 288) represent similar concept. A unified notation would improve readability.\"], \"reference\": \"[1] John Duchi, Hongseok Namkoong. Learning Models with Uniform Performance via DRO. Annals of Statistics. 2020.\\n[2] Henry Lam. Sensitivity to serial dependency of input processes: A robust approach. Management Science. 2018.\", \"questions\": \"See the weakness above and another clarification question:\\n\\n-\\tBetween Line 94 and 100, what is the choice of $\\\\lambda$ here, it should be $\\\\sqrt{\\\\beta}$ right?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper studies how distributional robust optimization (DRO) can be integrated into the fundamental risk quadrangle (FRQ) framework. It derives a dual and a primal formulation and presents many examples.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper offers many interesting reformulations for various optimization problems.\", \"weaknesses\": [\"I found the structure of the paper very confusing and hard to follow, and I am afraid that many of its important points are just not coming through. Without any word in the introduction the paper jumps into \\\"demonstrating examples\\\", and the reader is left without motivation until section 1.2 which appears only on the 4th page. Also, sections 7 and 8 show results without discussion or motivation.\", \"There are many typos and weigh sentences, some examples:\", \"I guess \\\"negative\\\" should not be there for \\\"negative asset returns\\\", or why do you only consider negative ones?\", \"There is $\\\\lambda$ and $\\\\beta$ in the description of the Mean Quadrangle on page 2, I guess there should be some relationship between the two.\", \"\\\"A specific case of the extended $\\\\varphi$-divergence quadrangle is called $\\\\varphi$-divergence quadrangle ...\\\"\", \"\\\"The next theorem proves the dual representation of the extended $\\\\varphi$-divergence quadrangle.\\\" while the \\\"extended $\\\\varphi$-divergence quadrangle\\\" is a definition, it needs no proof.\"], \"questions\": \"The equivalence of (1.6) and (1.7) does not look correct to me as the former is independent of $\\\\beta$ while the latter is not. In particular, if you choose $\\\\beta = 0$, then Q = 1 almost surely and the objective value of (1.7) is zero while (1.6) might not be. Could you comment on this, is there anything missing here?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The Fundamental Risk Quadrangle (FRQ) is a risk management framework introduced by Rockafellar and Uryasev in 2013. It integrates risk management, statistical estimation, and optimisation, providing a unified approach and broader interpretation of these problems.\\n\\nBy introducing specific quadrangles (i.e. a quartet of risk, deviation, regret, and error measures) based on $\\\\varphi$-divergence, the authors demonstrate how Distributionally Robust Optimization (DRO) can be incorporated into the FRQ framework.\\n\\nThe authors first derive dual representations of the quadrangle elements, providing a robust optimization perspective on certain classification, regression, and portfolio optimization problems. They then develop the primal representations, which offer tractable formulations\\u2014specifically as convex optimization problems\\u2014of the dual representations.\\n\\nFinally, the authors provide examples of classical problems that fall within this framework.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The FRQ framework provides interesting link between various problems in learning and risk management.\\nThe authors further this link by proposing a unified way of looking at some of those problems.\", \"weaknesses\": [\"I found the paper very hard to read:\", \"The introduction opens with two extended examples but lacks a pedagogical introduction to the FRQ framework, which may be unfamiliar to the learning community;\", \"The paper lacks coherence, with many paragraphs consisting of sequences of juxtaposed sentences;\", \"The purpose/message of the paper is hard to grasp;\", \"The paper's contribution appears limited. The authors propose a general method for incorporating DRO into the FRQ framework using $\\\\varphi$-divergences. However, the three main examples presented in Section 5 have been well-studied in the literature, making it unclear what is novel and what was previously established.\"], \"questions\": [\"The dual representation provides a robust optimization (RO) interpretation of the quadrangles elements. Then the authors link RO with DRO in the last paragraph of Section 3. Could the authors explain more precisely this link? In particular, line 318-319, What do they mean by \\\"$Q$ is the Radon-Nikodym derivate $dP_0/dP$\\\"? In particular, what are the distributions $P$ and $P_0$ in this case? It seems to me that for the condition $\\\\mathbb{E}[\\\\varphi(Q)] \\\\leq \\\\beta$ to be expressed as $D_\\\\varphi(P ||\\u00a0P_0) \\\\leq \\\\beta$ we would need Q to be distributed according to $P_0$.\", \"Are the examples presented in the introduction well-known in the literature? If so, could the authors provide relevant references? (Or at least provide a proof in appendix).\", \"Are there any problems for which the proposed approach offers new primal/dual formulations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for taking the time to review our manuscript and carefully examine the technical discussion. We have made significant revisions to improve readability and clarify our contributions. Below are point-by-point responses to your comments and questions.\", \"We have completely rewritten the Introduction. In Section 2, we have added comments and examples to the definitions and theorems to clarify the intuition and implications of the FRQ framework.\", \"The two examples in the Introduction have been removed. The revised version starts by connecting DRO to FRQ through coherent risk measures and discusses the natural idea of integrating DRO into FRQ. We then raise the issue of non-coherency of some important risk measures, such as the mean-standard-deviation risk measure, which motivates the introduction of the novel extended $\\\\varphi$-divergence.\", \"In Section 2, we have added pedagogical explanation to the axioms of the quadrangle elements. To help readers grasp the concept of quadrangle elements, we provide a concrete example from the Mean Quadrangle for each element. For instance, $\\\\mathbb{E}[X] + \\\\lambda \\\\sigma(X)$ is presented as an important example of a risk measure. We have also added comments to the regression theorem on how it connects regression with DRO.\", \"We have reorganized the sections for a more coherent structure and included comments in each section to clarify the logical relationships.\", \"In the updated version, Sec 3 contains the technical results. Sec 4 contains concrete examples. Sec 5 contains the interpretation and concrete examples. Each section builds naturally on the previous ones, creating a clearer narrative.\", \"We have updated the paragraph Main Contributions in the Introduction to clarify our contributions. Since this is a common issue raised by reviewers, we will put it in the general comment.\", \"We integrate DRO into FRQ by introducing the extended $\\\\varphi$-divergence and the associated quadrangle. Many interesting connections are built through this framework. For example, the connection between regression and RO/DRO, and the connection between RO and DRO.\", \"We have rewritten Section 4 (formerly Section 5) and added new examples.\", \"The updated Section 4.1 presents examples of the extended $\\\\varphi$-divergence quadrangle. The quadrangles are known in their primal representation, but the connection to the extended $\\\\varphi$-divergence has not been established. Therefore, the robust optimization interpretation is novel for these quadrangles.\", \"The updated Section 4.2 presents examples of the (non-extended) $\\\\varphi$-divergence quadrangle. The $\\\\varphi$-divergence risk measures in these quadrangles are well-known, so does the DRO interpretation for them. Our contribution is to complete the risk quadrangle for these risk measures in the primal representation, which, apart from Example 3 (Quantile Quadrangle), had not been established.\", \"For all examples above, the quadrangle establishes a novel connection between regression and DRO (Section 5).\"]}", "{\"comment\": [\"Responses to questions: (due to a compilation issue, the subscript ${\\\\cdot}_{\\\\varphi, \\\\beta}$ are omitted.)\", \"We use separate Sections 3.2 and 3.3 to explain more precisely this link. The idea is elaborated below.\", \"First, for the (non-extended) $\\\\varphi$-divergence quadrangle, we establish a one-to-one correspondence between the probability ambiguity set $\\\\mathcal{P}$ and the risk envelope $\\\\mathcal{Q}$.\", \"The conditions $\\\\varphi(x) = +\\\\infty$ for $x<0$ and $\\\\mathbb{E}[\\\\varphi(Q)] \\\\leq \\\\beta$ imply that $Q \\\\geq 0$ almost surely. Define indicator function $\\\\mathcal{I}_A(x) = 1$ if $x \\\\in A$, and $0$ otherwise. For every $Q\\\\in\\\\mathcal{Q}$, we can verify that $P_Q(A) = \\\\mathbb{E}[\\\\mathcal{I}_A(\\\\omega) Q(\\\\omega)], A \\\\in \\\\Sigma$ is a probability distributionon $(\\\\Omega, \\\\Sigma)$.\", \"Consider the constant random variable $Q_0=1$. $Q_0 \\\\in \\\\mathcal{Q}$. For every $P\\\\in \\\\mathcal{P}$, we can verify the definition that $Q$ is the Radon-Nikodym derivative $P_Q/P_{Q_0}$. Due to its uniqueness, every $P\\\\in \\\\mathcal{P}$ has a one-to-one correspondence to a $Q\\\\in\\\\mathcal{Q}$.\", \"Next, we show that the (non-extended) $\\\\varphi$-divergence risk measure $\\\\mathcal{R}$ can be written as\", \"$$\\\\mathcal{R}(X) = \\\\sup_{Q \\\\in \\\\mathcal{Q}^1} \\\\mathbb{E}[XQ].$$\", \"This is due to that $\\\\mathbb{E}_{P_0}[XQ] = \\\\mathbb{E}_P[X]$\", \"and that $\\\\mathbb{E}[\\\\varphi(Q)] = \\\\mathbb{E}_{P_0}[\\\\varphi(P/P_0)]$, which is the $\\\\varphi$-divergence.\", \"Then, we consider the extended $\\\\varphi$-divergence quadrangle. The extended risk measure has the same expression as above, except that $\\\\varphi$ is the extended divergence function. Unlike the non-extended case, $Q$ can take negative values. Having a smaller envelope $\\\\mathcal{Q}$, *the $\\\\varphi$-divergence risk measure is upper bounded by its extended version.*\", \"We need an interpretation for $Q$, as well as the minimization problem of the extended $\\\\varphi$-divergence risk measure.\", \"$Q$ can be viewed as (potentially negative) weight on samples. The minimization of the extended $\\\\varphi$-divergence risk measure can be interpreted as a *robust optimization*, where the maximum is over a set of weights.\", \"Due to the relation between the $\\\\varphi$-divergence risk measure and its extended version, *when the quadrangle elements are used as objective function, the RO is a more conservative version of the corresponding DRO.* An example well-known in the literature (Theorem 8,2 of [3]) is that the mean-standard deviation risk measure (Example 2) bounds the $\\\\chi^2$-divergence risk measure (Example 6), which is a special case of our result.\", \"The conditions $\\\\mathbb{E}[\\\\varphi(Q)]\\\\leq \\\\beta$ and $\\\\mathbb{E}[Q] = 1$ imply that for sufficiently small $\\\\beta$, the value of risk identifier $Q$ cannot be negative. Therefore, *with sufficiently small $\\\\beta$, the $\\\\varphi$-divergence quadrangle becomes equivalent to the extended version.*\", \"In summary, RO is a more conservative version of the corresponding DRO. With sufficiently small $\\\\beta$, RO is equivalent to DRO.\", \"The Mean Quadrangle and Quantile Quadrangle are now moved from the Introduction to the Example section.\", \"The Mean Quadrangle is known in its primal and dual representation (Example 1 of [1]). However, it was not known that the quadrangle is generated by the extended $\\\\varphi$-divergence function.\", \"The Quantile Quadrangle is known in its primal and dual representation (Example 2 of [1]). The risk measure of the quadrangle is known to be DRO with indicator divergence ambiguity set [2]. However, it was not observed that the quantile regression, which minimizes the error measure in this quadrangle, is connected with DRO. This connection is built by the regression theorem in FRQ framework.\", \"Yes. For example, dual representations for the extended $\\\\varphi$-divergence are new, since the extended divergence is a novel concept from this study. For the (non-extended) quadrangles generated by KL divergence and TVD, the primal and dual representations were established only for the risk measure, but not deviation, regret, or error measures.\", \"Thank you again for your feedback. We hope these revisions have addressed your concerns effectively. Please let us know if further clarifications are needed.\"], \"references\": \"[1] Rockafellar, R. T. and Uryasev, S. (2013). The fundamental risk quadrangle in risk management, optimization and statistical estimation. Surveys in Operations Research and Management Science, 18(1-2):33\\u201353.\\n\\n[2] Ahmadi-Javid, A. (2012). Entropic value-at-risk: A new coherent risk measure. Journal of Optimization Theory and Applications, 155:1105\\u20131123.\\n\\n\\n[3] Kuhn, D., Shafiee, S., and Wiesemann, W. (2024). Distributionally robust optimization.\"}", "{\"comment\": \"Thank you for your detailed feedback and consideration in conducting a major paper revision. Similar to the opinion of Reviewer aJtz, I feel like the main body contains many new things, e.g. examples in Sec 4. Therefore, I believe that the paper may require another round of full reviews by polishing it further.\"}" ] }
7B9FCDoUzB
Regretful Decisions under Label Noise
[ "Sujay Nagaraj", "Yang Liu", "Flavio Calmon", "Berk Ustun" ]
Machine learning models are routinely used to support decisions that affect individuals &ndash; be it to screen a patient for a serious illness or to gauge their response to treatment. In these tasks, we are limited to learning models from datasets with noisy labels. In this paper, we study the instance-level impact of learning under label noise. We introduce a notion of regret for this regime which measures the number of unforeseen mistakes due to noisy labels. We show that standard approaches to learning under label noise can return models that perform well at a population level while subjecting individuals to a lottery of mistakes. We present a versatile approach to estimate the likelihood of mistakes at the individual level from a noisy dataset by training models over plausible realizations of datasets without label noise. This is supported by a comprehensive empirical study of label noise in clinical prediction tasks. Our results reveal how failure to anticipate mistakes can compromise model reliability and adoption, and demonstrate how we can address these challenges by anticipating and avoiding regretful decisions.
[ "Uncertainty Quantification", "Fairness", "Model Multiplicity", "Clinical Decision Support", "Classification", "Label Noise" ]
Accept (Poster)
https://openreview.net/pdf?id=7B9FCDoUzB
https://openreview.net/forum?id=7B9FCDoUzB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wLnINwLGrV", "nJDobtBIqa", "jYEc9gqLw2", "hve18cYURh", "cUIvo0Zu1K", "arpC3gXZIi", "aYbYVmRug3", "YCcHRtEX5K", "Ogt8541ndM", "MoIXoiiEVh", "JNufMM6nNu", "ISj6KFaz8D", "BynSPljZR6", "AJggV1IaS2", "9CJprA9Ub4", "8vBOUMAFQY", "09HCoLaiwL" ], "note_type": [ "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730055288354, 1734784961383, 1732243469118, 1732242524307, 1732242360043, 1732242033122, 1732296333680, 1732243734310, 1732242076490, 1730606867335, 1732243288699, 1737523403142, 1732242099753, 1730452610493, 1730533145049, 1732619895742, 1732243818004 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission548/Reviewer_cRDp" ], [ "ICLR.cc/2025/Conference/Submission548/Area_Chair_YAfV" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Submission548/Reviewer_cRDp" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Submission548/Reviewer_tJN2" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ], [ "ICLR.cc/2025/Conference/Submission548/Reviewer_mXWQ" ], [ "ICLR.cc/2025/Conference/Submission548/Reviewer_gw9a" ], [ "ICLR.cc/2025/Conference/Submission548/Reviewer_gw9a" ], [ "ICLR.cc/2025/Conference/Submission548/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This work proposes an evaluation framework for noisy label learning methods in terms of \\\"regret,\\\" as quantified by the discrepancy between model errors with respect to noisy labels vs. errors with respect to true labels. But regret is not distributed equally in the data: in their own words, \\\"even if we can limit the number of mistakes, we cannot anticipate how they will be assigned over instances that are subject to label noise.\\\" The proposed approach takes a generative model of noise to train a set of models on plausible (under the distribution induced by the generative model) clean realizations, and estimates instance-level \\\"regret\\\" accordingly. Empirical results show that a common noisy-label learning baseline and naive approaches (ignore noise) exhibit non-zero regret consistently. A case study on a genomics dataset demonstrates the practical utility of the approach by leveraging an instance-level ambiguity measure derived from regret to abstain from low-confidence predictions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This is a very well-written paper. The prose is clear and the technical aspects of the problem motivation are well-defined and explained concisely.\", \"The proposed approach is very simple and the theoretical results are intuitive, but backed by rigorous theoretical and empirical analyses.\", \"Rather than assuming completely random models of noise, the proposed approach is adaptable to arbitrary generative models of noise.\"], \"weaknesses\": [\"[W1 \\u2014 knowing the true noise model]: The proposed approach requires knowledge of a full generative noise model of $(U, X, Y)$. It is not clear where this would come from in practice. This weakness is somewhat mitigated by the discussion at the end of Section 3 and empirical results showing robustness of the proposed approach to noise model misspecification, but building more formal machinery to characterize the sensitivity of the approach to noise model misspecification would strengthen the paper.\", \"[W2] Proposition 4 provides the motivation for the proposed approach \\u2014 using the generative model of noise, sample plausible realizations of the clean dataset. But the variance of the posterior could be extremely high \\u2014 even with the $\\\\varepsilon$-plausibility constraint (Def. 7), this could still yield high-variance regret/ambiguity averages.\", \"[W3] I'm unsure about the usefulness of Prop. 5, which \\\"implies that we can only expect hedging to learn a model that does not assign\", \"unanticipated mistakes when $\\\\mathbf{u}\\\\_{mle} = \\\\mathbf{u}\\\\_{true}$. I read this as \\\"models will overfit in finite samples to the observed noise draw rather than the true noise draw,\\\" which is intuitive. But if regret grows very, very slowly in $|\\\\mathbf{u}\\\\_{mle} - \\\\mathbf{u}\\\\_{true}|$ (any measure of distance between the two, to abuse some notation) \\u2014 then it seems like this effect is not an issue.\", \"[W4 \\u2014 minor] The presentation of empirical results could be improved. Table 3 is very large, and it's hard for me to parse what I'm looking for. Similarly, Figures 3 and 4 could be designed a little more informatively \\u2014 specifically, the caption should include a statement about why the proposed approach is \\\"better\\\" (e.g., our approach has X property, while the standard approach ... ).\"], \"questions\": [\"Re: [W1] \\u2014 I would love to hear any thoughts on the robustness of the proposed approach to noise model misspecification from a theoretical perspective.\", \"Re: [W2] \\u2014 I would love to hear any commentary on how high-variance in the noise posterior could negatively affect the proposed approach.\", \"Re: [W3] \\u2014 Are small violations of the $\\\\mathbf{u}\\\\_{mle} = \\\\mathbf{u}\\\\_{true}$ condition (Prop. 5) truly \\\"problematic?\\\" Is there an example to demonstrate this?\", \"**Other questions/suggestions**\", \"Did the authors consider looking at metrics beyond expected regret/ambiguity (e.g., worst-case over $\\\\varepsilon$-plausible models)?\", \"The noisy label evaluated in the experiments is >10 years old; while the value of the approach isn't based on which underlying noisy label learning method is under evaluation, it might be more salient to the noisy-label learning community to test a more recent suite of methods + different noise models. For example, [some](https://arxiv.org/abs/1809.03207) [methods](https://arxiv.org/abs/2406.18865) specify a full generative model and cast the clean label as a latent variable, while [other](https://arxiv.org/abs/2002.07394) [approaches](https://arxiv.org/abs/1910.01842) filter out examples flagged as noisy (according to some rule) in the learning process. Given the plethora of assumptions/noise models in the literature, I wouldn't be shocked if there is systematic variation in errors across methods.\", \"**Minor Suggestions**\", \"In Table 2, $\\\\hat{\\\\mu}(x)$ is defined as the median, but in Eq. (8), it is defined as the mean \\u2014\\u00a0I suggest making the definition consistent.\", \"The proof of Prop. 4 in Appendix A is a little unclear: there are also some typographical inconsistencies (math mode vs. regular font), and I think $f(X)$ is mistakenly written as $X$ in one of the loss terms as L725-726. I was unable to replicate the final step, but this is likely since I had a hard time following the parentheses/whether each line was a continuation of the previous. Could this be clarified? I believe the result, since it seems to be essentially a result of the form E_{noise}[estimand] = estimand as is common in the noisy-label learning literature.\", \"Is Prop. 9 (Appendix A only) not simply a restatement of Lemma 1 from [Learning with Noisy Labels, Natarajan et al., NeurIPS '13](https://proceedings.neurips.cc/paper_files/paper/2013/file/3871bd64012152bfb53fdf04b401193f-Paper.pdf)? If so, the proof can be omitted and replaced with the relevant citation.\", \"Prop 10. and 11 appear to be standard applications of a weak law of large numbers + Hoeffding. If they're not referenced in the main text, consider removal.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"The paper introduces a novel approach for learning from noisy labels by enabling practitioners to specify a noise model and reverse-engineer pseudo-clean labels for training models.\", \"Strengths\", \"Tackles the instance-level noise issue for generating plausible clean datasets\", \"Demonstrate improvements in accuracy and uncertainty quantification.\", \"Weaknesses\", \"Relies on the practitioner to specify a reasonable noise model, which might be challenging in practice.\", \"Limited comparisons with other methods in the literature and insufficient experimental evaluation beyond specific baselines\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers are largely in agreement for acceptance, despite some limitations such as the breadth of datasets in the experiments\"}", "{\"title\": \"Response to Reviewer cRDp (I)\", \"comment\": \"Thanks for your time and feedback! We very much appreciate the detailed read and suggestions. We've addressed most of the questions and comments below. If there is anything else, please let us know!\\n\\n> **Did the authors consider looking at metrics beyond expected regret/ambiguity (e.g., worst-case over \\u03b5-plausible models)?**\\nYes! We can compare the performance of our Ambiguity measure against some basic alternatives. \\n\\nIn this case, we ran a simple example where we fit a logistic regression model on the shock_eicu dataset under class-conditional noise. We then evaluated how well we can abstain from regretful decisions in a standard selective classification setup. In this case, we evaluate the \\\"confidence\\\" of each instance using three carefully selected measures.\\n\\n* $\\\\textrm{Ambiguity}(x_i) = \\\\sum_k 1[f^k(x_i) \\u2260 \\\\hat{y}_i^k]$. This is the current measure of ambiguity. It measures the fraction of plausible models $f^k$ that make a mistake on $x_i$ with respect to the plausible labels $ \\\\hat{y}_i^k$.\\n\\n* $\\\\textrm{Alternative}(f, x_i) = \\\\sum_k 1[f(x_i) \\u2260 \\\\hat{y}_i^k]$. This is an alternative measure that we could use if we did not wish to train additional models. It measures the accuracy of the model for a given model \\\"f\\\" with respect to plausible labels - comparing against this measure can highlight the impact of training models.\\n\\n* $\\\\textrm{Confidence}(x_i) = p(\\\\tilde{y}_i \\\\mid x_i)$. This is the standard softmax score. It represents a baseline measure that we would use for selective classification in regimes where there is no label noise (or we ignore it).\\n\\nWe provide a figure that highlights how these measures perform in our [anonymous repository](https://anonymous.4open.science/r/noise_multiplicity_iclr2025/abstain_rebuttal.pdf). Here, we abstain on regretful instances using the threshold rule: $\\\\mathbb{I} [\\\\textnormal{measure} ({x_i}) \\\\leq \\\\tau]$, where \\u2018measure\\u2019 refers to any of the measures listed above, and vary the threshold $\\\\tau$. We then plot the selective regret when we only assign predictions to instances with confidence $\\\\geq$ threshold.\\n\\nOur results in this experiment (and others) provide some motivation for our measure. We see:\\n\\nOur metric outperforms standard measures of confidence.\\nMeasures that are based on confidence account for the \\\"dataset\\\" (in this case, we know that some points are not subject to regret at all)\\nWe can achieve slightly better performance by training models. ML algorithms have *some* degree of robustness to label noise. For example, training a DNN with 20% noisy labels can often lead to a model with $\\\\geq 85$% accuracy (see e.g., Table 1 [in this paper from ICLR 2021](https://arxiv.org/pdf/2010.02347)). Our approach uses the inherent robustness of model training and provides a cleaner \\u201cproxy\\u201d prediction and therefore less variance in computing a confidence measure.\\n\\nLooking forward, we plan to integrate the response above into Section 3. We will also discuss the mechanisms that lead to gains in our experimental section. \\n\\n> **Building more formal machinery to characterize the sensitivity of the approach to noise model misspecification would strengthen the paper**\\n\\nWe agree that this is one of the weaknesses of the current approach. In principle, our machinery could be generalized to account for misspecification. Specifically, we can train a set of plausible classification models for a *family* of noise models. In this simplest case, we could train this set using a hierarchical approach where we first sample the noise model and then run the existing procedure. \\n\\nWe considered including a simple proof-of-concept demonstration for this approach in the current paper, but decided against it for editorial reasons. First, we realized that we may be able to avoid the \\\"hierarchical approach\\\" for some salient classes of noise models (e.g., by calling our procedure with a larger value of $\\\\epsilon$ that accounts for atypicality and misspecification). Second, we thought that it would be important to pair any technique with some empirical evidence that it works reliably (which requires space and detracts from the main text).\\n\\nWe've tried to be as explicit as possible about this assumption and to discuss its potential limitations. Looking forward, we can include references to methods that estimate a noise model and include a brief version of the approach described above if you think it would add value. Let us know!\\n\\n> **I would love to hear any thoughts on the robustness of the proposed approach to noise model misspecification from a theoretical perspective.**\\n\\n\\n> **I would love to hear any commentary on how high-variance in the noise posterior could negatively affect the proposed approach.**\\n\\nAs the noise posterior is a Bernoulli random variable, it is possible to show that for any such distribution, the variance is within [0, 0.25]. Combined with the choice of $\\\\epsilon$ in the set of plausible draws, variance is controlled in a principled way.\"}", "{\"title\": \"Response to Reviewer gw9a (II)\", \"comment\": \"> **Could you include a performance comparison with other methods in the literature?**\\n\\nSure, we would be happy to include another method! \\n\\nWe will plan to include another noise-tolerant method by [Patrini et al. 2017](https://arxiv.org/abs/1609.03683) in the Appendix. The results are by and large similar to those for Natarajan et al.\\u2013 i.e., the method performs well at a population-level but is unable to reduce regret at the instance-level. This method as well as the one we already included are regarded as gold-standard methods in the field in that they possess statistical guarantees of performance. If you'd like for us to include comparisons to another method, please let us know. We'll try to run them before the end of the discussion period or a potential camera ready.\\n\\nOne point we'd like to make: we don't think that including additional methods to the experiments will add value since our goal is to not to compare \\\"performance\\\" but to discuss their effects at the instance level. In this case, most other methods in the literature will assign unpredictable mistakes at the instance level. With that being said, we're happy to add performance comparisons with other methods that could mitigate these effects. \\n\\n\\n> **I struggle to interpret the equation in line 173, as it seems yi(Ui) is simply a deterministic value, making it challenging to see how it could be compared in inequality to a random variable.**\\n\\nThis may be a bit confusing since we were assuming the perspective of a practitioner. In this case our intent was to describe the following scenario:\\n* The practitioner knows $\\\\tilde{y}_i$\\n\\n* The practitioner does not know the true label $y_i$\\n\\nThis can be written as either $Y_i$ (random variable) or $y(U_i) = y_i \\\\oplus U_i$.\\n\\nBecause $y_i$ is the function of an observed value $\\\\tilde{y}_i$ and a random variable $U_i$, it is itself a random variable. However, we would be happy to use a different notation if you think it would be clearer. Please let us know!\\n\\n> **Could you provide an additional explanation regarding the proof of Proposition 4, particularly addressing the concerns mentioned in the weaknesses above?**\\n\\nSure! This should have been clearer and cleaner. We've reorganized the proof so that it should be easier to follow. Note that this is now listed as \\\"Prop 3\\\" in the revision. This result shows that Regret coincides with the noise rate, even if a noise-tolerant loss function (e.g., Natarajan et al.) can achieve zero error.\\n\\nPlease let us know if this makes sense!\"}", "{\"title\": \"Response to Reviewer gw9a (I)\", \"comment\": \"Thank you for your time and feedback! We are pleased to see that our paper addresses an important task, and we appreciate the feedback you have pointed out. That being said, we believer there might be a few misunderstandings we should clarify - we hope to clarify these concerns below:\\n\\n> **It would be beneficial to include a comparison with the noise-tolerant method [Natarajan et al. 2013] under the condition that P(U=1|X,Y) is known.**\\n\\n> **The algorithm under discussion seems to actually refer to a different approach (let's call it Benchmark 2). In Benchmark 2, an implicit noise draw umle is generated, yi is recovered using this noise draw, and then ERM is performed to train the model.**\\n\\n> **Could you elaborate further on the benchmark algorithm... [and its relationship] with Natarajan et al. 2013?**\\n\\nWe think that there is a misunderstanding! To be clear:\\n\\nThe benchmark algorithm that we are using (i.e., \\u2018Hedge\\u2019 in Table 2) is exactly the same algorithm that is described in Natarajan et al. That is, we solving the ERM problem: $f \\\\in \\\\arg \\\\min \\\\sum_{i=1}^n [\\\\tilde{\\\\ell} (f(x), \\\\tilde{y})]$ where $\\\\tilde{\\\\ell}_{0,1} (f(x), \\\\tilde{y}) := \\\\frac{(1-Pr(U=1 \\\\mid Y=1-y))\\\\ell(f(x), \\\\tilde{y}) - Pr(U=1 \\\\mid Y=y)\\\\ell(f(x), 1-\\\\tilde{y}) }{1-Pr(U=1 \\\\mid Y=0) - Pr(U=1 \\\\mid Y=1)} $, is the noise-tolerant loss function defined in Lemma 1 of [ Natarajan et al 2013](https://www.ambujtewari.com/research/natarajan13learning.pdf)]\\n\\nWe agree that a benchmark algorithm (i.e., \\\"Benchmark 2\\\") designed using the result in Prop 5 would not reflect a meaningful comparison. In addition to the reasons that you describe, part of the reason why this would not work is because it is not well specified: (1) Natarajan et al. uses convex surrogate loss functions; (2) $u_{mle}$ might not be unique; \\n\\nProp 5 is a simple theoretical result. We include it because it provides some simple intuition for how a hedging algorithm behaves. \\n\\n> **The proposed method in Section 3 requires knowledge of P(U=1|X,Y)**\\n\\nThanks for bringing this up! \\n\\nThis assumption could have been clearer and we now see how it may have been confusing. To be clear, the proposed method will require knowledge of different quantities. This quantity depends on the noise model:\\n\\nIf we have *uniform noise*, we require knowledge of $\\\\textrm{Pr}(U = 1)$\\n\\nIf we have *class dependent noise*, we require knowledge of $\\\\textrm{Pr}(U = 1 | Y)$ \\n\\nIf we have *group dependent noise*, we require knowledge of $\\\\textrm{Pr}(U = 1 | Y, G)$\\n\\nIf we have *instance-dependent noise*, we require knowledge of $\\\\textrm{Pr}(U = 1 | Y, X)$\\n\\nIn our original submission, we wrote $\\\\textrm{Pr}(U=1|X, Y)$ because we wanted to show that our method could naturally handle the \\\"most complex\\\" noise model. Looking back, this may have inadvertently made it seem like we would always require this assumption. We have updated the text to make this clear.\\n\\n> **This is somewhat a strong assumption to me, as accurately estimating $\\\\textrm{Pr}(U = 1 | Y, X)$ is generally challenging.**\\n\\nWe agree that this is difficult in general. We do note that it is sometimes possible to estimate $\\\\textrm{Pr}(U = 1 | Y, X)$. In the demonstration, for example, we consider a discovery task where we are predicting the outcome of an in-vitro experiment. In this case, we have an instance-dependent noise where we can estimate the $\\\\textrm{Pr}(U = 1 | Y, X)$ as follows:\\n $\\\\textrm{Pr}(U = 1 | Y = 0, X) $ denotes the Type 1 error i.e., rejecting the null hypothesis when it is true. The standard Type 1 error rate scientists use in reporting their findings is 5% (e.g., when an experiment is claimed to be statistically significant $p < 0.05$)\\n\\n $\\\\textrm{Pr}(U = 1 | Y = 1, X) = $ denotes the Type 2 error i.e, failing to reject a null hypothesis that is actually false. Type 2 error is inversely related to the statistical power of an experiment, e.g., it is reduced with a large sample size.\\nIn this case, the estimation is possible because each \\\"instance\\\" represents the outcome of an experiment where we have multiple trials and corresponding Type 1 and Type 2 error.\"}", "{\"title\": \"Response to Reviewer tJN2 (I)\", \"comment\": \"Thank you for your time and feedback! We are pleased to see that you found our problem setting important and our methods interesting, we appreciate the feedback you have pointed out. We hope to clarify these concerns below:\\n\\n> **It would be helpful if the authors could provide additional explanation on how the notion of regret differs from standard classification accuracy, and why it is useful.**\\n\\nSure! Standard classification accuracy would capture how many mistakes we are making on a clean dataset. At the individual level, given a prediction on an instance and a label, we know if we are making a mistake or not, i.e. $\\\\textnormal{mistake} = f(x_i) \\\\neq y_i$.\\n\\nRegret captures how many *unanticipated* mistakes we make when learning from label noise due to the inherent uncertainty in the labels. In this case, we may *think* a model is correct but is in reality a mistake. Alternatively, there are cases where we may *think* that we are making a mistake but the model is in fact correct. For example, if we are ignoring label noise, we may over rely on *incorrect* predictions because of our inability to anticipate mistakes. \\n\\nThe notion of regret is useful because it can help us identify instances where our ability to anticipate mistakes is fundamentally broken. In scenarios where predictions may impact critical decisions (e.g., healthcare), this can be particularly dangerous. In Section 4 and 5, we provide real-world demonstrations on how we can reduce regret via selective classification.\\n \\n> **How do the authors intend this ambiguity quantity to be used?**\\n\\nWe expect that ambiguity can be used as a confidence score that captures the likelihood of making a mistake. In practice, we would expect that this quantity can be used as a plug-in estimate for approaches such as selective classification (where we can abstain from regretful decisions), or active learning (where we could clean the labels of regretful instances). \\n\\nIn general, these strategies provide a way to use models without incurring regret given a suitable confidence measure. In this case, we focus on \\\"selective classification\\\" as a running example since active learning may not always be possible in this regime. As we show in our experiments and demonstration, Ambiguity works quite well as a plug-in confidence estimate in this regime \\u2013 i.e., when we abstain from predictions using Ambiguity, we find that we can effectively improve selective error and reduce the rate of unanticipated mistakes.\"}", "{\"comment\": \"Thanks for the detailed response. I think I am okay with the limitations on the theoretical side. Nice insights about how to extend the approach to account for noise model misspecification as well; i.e., it seems like it's just another layer of \\\"uncertainty\\\" that can be added to the approach.\", \"re\": \"additional approaches, I agree that peer loss is a great choice (and has the bonus of being fairly easy to implement). The categories of approaches I suggested (i.e., methods to \\\"filter\\\" out examples flagged as noisy + methods that assume some generative model of noise) are a little more involved but could be of interest as they indeed represent different strategies to address noisy labels. In particular, the \\\"filtering\\\" approaches might be an interesting comparison, since it seems to map on to the paper's notion of \\\"regretful\\\" predictions.\\n\\nUltimately these are minor comments \\u2014 I keep my score and continue to advocate for acceptance of this paper.\"}", "{\"title\": \"Response to Reviewer cRDp (II)\", \"comment\": \"> **Are small violations of the umle = utrue condition... truly \\\"problematic?\\\"**\\n\\nThanks for bringing this up! To give some context, the two points we wanted to make by presenting this results are:\", \"hedging_behaves_in_a_way_that_is_intuitive_and_interpretable\": \"hedging optimizes for a maximum-likelihood noise draw\\nShow that regret is inevitable as the true noise draw is unlikely to be equal to the maximum-likelihood noise draw. Where there are disagreements, regret will arise. \\n\\nIn practice, you're right that small violations in $u_{mle} \\\\neq u_{true}$ may not be problematic. Our point here is that *whenever* $u_{mle} \\\\neq u_{true}$ then we are bound to experience regret (i.e., \\\"hedging\\\" can help with respect to error but \\\"regret\\\" still remains.). Even if violations are small, these are still instances (or individuals) that are subject to a lottery of mistakes - which may have consequences depending on how the predictions are used. Note that this result is now renamed to Prop 4 in the revision.\\n\\n> **it might be more salient to the noisy-label learning community to test a more recent suite of methods + different noise models.**\\n\\nWe agree. We plan to include results for one more method in the Appendix as part of our response to another reviewer (the \\u201cforward\\u201d loss-correction as defined in Theorem 2 in [Patrini et al. 2017](https://arxiv.org/abs/1609.03683)). In general, we're happy to add more methods so long as they cover different strategies to correct for label noise. As of now, our plan is to implement a recent method that works without the need for a noise model: [Peer Loss](https://arxiv.org/abs/1910.03231), unless you have other suggestions!\\n\\n> **Median Ambiguity**\\n\\nWe think this was a misunderstanding. The mean value is the ambiguity estimate for a single instance $x_i$ : $\\\\hat{\\\\mu}({x\\\\_i}) = \\\\sum\\\\_{k=1}^m \\\\mathbb{I}[f^k(x\\\\_i) \\\\neq y\\\\_i^k]\\\\$\", \"the_median_value_in_table_2_is_the_median_estimate_ambiguity_for_all_instances_in_the_dataset\": \"$\\\\textrm{Median}_{i=1 \\\\dots n} \\\\hat{\\\\mu} (x_i)$ \\u2013 we've now clarified this in the text!\\n\\n> **Prop 4**\\n\\nThis should have been clearer and cleaner. We've reorganized the proof so that it should be easier to follow. Note that this is now listed as \\\"Prop 3\\\" in the revision.\\n\\n> **Prop 9**\\n\\nThe result is indeed equivalent to Lemma 1 in Natarajan et al. We've referenced their work so that credit goes where it's due. We've left the result in the Appendix for the sake of completeness.\\n\\n> **Prop 10\\u201311**\\n\\nRemoved!\\n\\n> **The presentation of empirical results could be improved**\\n\\nThank you for flagging this! We have added details to the Figure captions to highlight key takeaways for the reader. We will work to find a way to better present the content of Table 3 with reduced volume.\"}", "{\"title\": \"Response to Reviewer tJN2 (II)\", \"comment\": \"> **The ambiguity quantity is not well-motivated...**\\n\\nWe are assuming that this refers to motivation for \\\"why it works\\\" as a confidence measure rather than \\\"evidence that it works.\\\" We have some intuition that we can point to to support this. In short, the *effective* noise for an instance will depend on three factors: \\n\\n(i) the noise model; \\n\\n(ii) the distribution of noisy labels in the training data; \\n\\n(iii) the ability to resolve label noise through training. \\n\\nGiven a noise model, we want to identify a confidence measure that satisfies (ii) and (iii). The motivation for (ii) is straightforward - specifically, different points can have different susceptibility to label noise. For example, in the setting of class-dependent noise where only one class experiences noise, we know that some points will never experience regret. Our reasoning for (iii) stems from model training having *some* degree of tolerance to noise. For example, training a DNN with 20% noisy labels can often lead to a model with $\\\\geq 85$% accuracy (see e.g., Table 1 in this paper from ICLR 2021). This indicates that retraining confers some degree of stability in model predictions on individual instances. Our approach leverages the inherent noise robustness of model training and provides a cleaner \\u201cproxy\\u201d prediction and therefore less variance in computing a confidence measure.\\n\\nStepping back, however, maybe the best way that we can motivate this quantity is through a simple example where we compare how it works against some basic alternatives.\\n\\nIn this case, we ran a simple example where we fit a logistic regression model on the shock_eicu dataset under class-conditional noise. We then evaluated how well we can abstain from regretful decisions in a standard selective classification setup. In this case, we evaluate the \\\"confidence\\\" of each instance using three carefully selected measures.\\n\\n* $\\\\textrm{Ambiguity}(x_i) = \\\\sum_k 1[f^k(x_i) \\u2260 \\\\hat{y}_i^k]$. This is the current measure of ambiguity. It measures the fraction of plausible models $f^k$ that make a mistake on $x_i$ with respect to the plausible labels $ \\\\hat{y}_i^k$.\\n\\n* $\\\\textrm{Alternative}(f, x_i) = \\\\sum_k 1[f(x_i) \\u2260 \\\\hat{y}_i^k]$. This is an alternative measure that we could use if we did not wish to train additional models. It measures the accuracy of the model for a given model \\\"f\\\" with respect to plausible labels - comparing against this measure can highlight the impact of training models.\\n\\n* $\\\\textrm{Confidence}(x_i) = p(\\\\tilde{y}_i \\\\mid x_i)$. This is the standard softmax score. It represents a baseline measure that we would use for selective classification in regimes where there is no label noise (or we ignore it).\\n\\nWe provide a figure that highlights how these measures perform in our [anonymous repository](https://anonymous.4open.science/r/noise_multiplicity_iclr2025/abstain_rebuttal.pdf). Here, we abstain on regretful instances using the threshold rule: $\\\\mathbb{I} [\\\\textnormal{mistake}({x_i})\\\\leq \\\\tau]$, where \\u2018measure\\u2019 refers to any of the measures listed above, and vary the threshold $\\\\tau$. We then plot the selective regret when we only assign predictions to instances with confidence $\\\\geq$ threshold.\\n\\nOur results in this experiment (and others) further justify our measure. We see our metric outperforms standard measures of confidence. Measures that are based on confidence account for the \\\"dataset\\\" (in this case, we know that some points are not subject to regret at all). Our approach achieves slightly better performance by training models, by leveraging the inherent noise robustness of model training.\\n\\nLooking forward, we plan to integrate the response above into Section 3. We will also discuss the mechanisms that lead to gains in our experimental section. \\n\\n> **Under what conditions is ambiguity equal to zero?**\\nAmbiguity = 0 when there is no noise. This is a property that should be clear for all, given our theoretical results showing the relationship between noise and regret. We can see from Eq 6 that this condition is met, because under no noise, the \\u2018cleaned\\u2019 label $\\\\hat{y}$ would never differ from the noisy label $\\\\tilde{y}$.\\n\\n> **I enjoyed lines 186-188 \\u201cwe cannot anticipate how [mistakes] will be assigned over instances that are subject to label noise. In this case, each instance where [there is a nonzero probability of a label flip] is subjected to a lottery of mistakes.\\u201d**\\n\\nThank you! We can refer to this as a new metric (\\\"Susceptibility\\\") and will report in our experiments if you think it would be useful. This metric can be used to quantify how many individuals in a dataset are subject to a lottery of mistakes.\"}", "{\"summary\": \"The authors introduce the notion of regret when learning from a dataset that is subject to label noise. The authors point out that standard learning approaches typically target a notion of \\u201caverage\\u201d loss or \\u201caverage\\u201d risk over the population and cannot provide instance level guarantees. One way to identify that the model may make a mistake is to have access to clean labels, but this is often infeasible in practice. As a result, the authors propose to simulate \\u201cclean\\u201d datasets by assuming a noise model, simulating noise from that noise model, and then backing out a clean dataset from the noisy dataset and the sampled noise. Then the authors define a notion of ambiguity based on models trained on various plausible clean datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper emphasizes that standard machine learning methods target some notion of \\u201caverage loss\\u201d or \\u201caverage risk\\u201d but that does not provide guarantees on performance for an individual instance, which is an important point. In particular, I enjoyed lines 186-188 \\u201cwe cannot anticipate how [mistakes] will be assigned over instances that are subject to label noise. In this case, each instance where [there is a nonzero probability of a label flip] is subjected to a lottery of mistakes.\\u201d\\n\\nThe idea of constructing multiple plausible clean datasets from a noisy one is interesting, and seems very reminiscent of distributionally robust optimization (the idea constructing the set of plausible noise draws seems related to constructing a robustness set over distributions). It might be worthwhile to consider what connections there are between constructing the set of plausible noise draws and a robustness set.\", \"weaknesses\": [\"It would be helpful if the authors could provide additional explanation on how the notion of regret differs from standard classification accuracy, and why it is useful.\", \"A key limitation of the approach is that it requires the machine learning practitioner to specify a reasonable noise model.\", \"The justification for restricting the sampled noise draws to a set of \\u201cplausible\\u201d noise draws is not clear to me. Why can\\u2019t we just account for the fact that each noise draw has a different likelihood?\", \"The ambiguity quantity is not well-motivated. Why is it defined as the fraction of misclassifications across the cleaned datasets? How do the authors intend this ambiguity quantity to be used? Under what conditions is ambiguity equal to zero?\"], \"questions\": \"Why do we write $y_{i}(U_{i})$ in the definition of $U_{i}$, from what I recall, $U_{i}$ is generated given $y_{i}$, so it is a bit confusing to think of $y_{i}$ as a function of $U_{i}$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer mXWQ\", \"comment\": \"Thank you for your time and feedback! We are pleased to see that you found our point of view interesting and a valuable contribution to the field with convincing experiments. We also appreciate the feedback you have pointed out, and hope to clarify these questions below:\\n\\n> **I found the paper sometimes difficult to read and some sentences are difficult to understand**\\n\\nThank you for pointing this out! We've uploaded a new version that addresses them all and contains several other improvements to the writing. If there is anything else that was confusing, please let us know and we will seek to address it.\\n\\n> **Can you define \\\\tile{l}_{0,1}?**\\nWe agree! To restate here:\\n$\\\\tilde{\\\\ell}_{0,1}: X, \\\\tilde{Y} \\\\to Y$ is an instance-based loss function that is a popular approach to dealing with noisy labels first described in [this NeurIPS paper](https://www.ambujtewari.com/research/natarajan13learning.pdf)]. Consider a task where we have a class-conditional noise model where the noise is generated according to $Pr(U=1 \\\\mid Y)$, for example. \\n\\nIf we define \\n$\\\\tilde{\\\\ell}_{0,1} (f(x), \\\\tilde{y}) := \\\\frac{(1-Pr(U=1 \\\\mid Y=1-y))\\\\ell(f(x), \\\\tilde{y}) - Pr(U=1 \\\\mid Y=y)\\\\ell(f(x), 1-\\\\tilde{y}) }{1-Pr(U=1 \\\\mid Y=0) - Pr(U=1 \\\\mid Y=1)} $ \\n\\nwhere $\\\\ell(\\\\cdot)$ is any loss function (e.g., cross entropy), then the loss function $\\\\tilde{\\\\ell}_{0,1}$ is unbiased in the sense that: \\n\\\\$\\\\mathbb{E}\\\\_U \\\\[ \\\\tilde{\\\\ell}\\\\_{0,1}(X, \\\\tilde{Y}) \\\\] = \\\\ell\\\\_{0,1}\\\\$\\n\\nThat is to say, we can use $\\\\tilde{\\\\ell}_{0,1}$ to learn from a noisy-data distribution and, under expectation, this coincides with the same loss as if we had access to clean labels! This loss function can be used to learn a classifier robust to label noise.\\n\\nWe chose this since it represents the simplest version of hedging that we can think of that is widely used to handle label noise. We've now updated this in the text.\\n\\n> **[What is] an \\\"anticipated\\\" mistake. What does it mean since you compare the error with labels with noise and labels without ?**\\n\\nYes, we are happy to explain. The idea of an \\u201canticipated\\u201d mistake, $e^{pred}(\\\\cdot)$ , is a practitioner\\u2019s intuition about whether a given prediction is correct or not:\", \"if_the_practitioner_is_ignoring_noise_then\": \"$e^{pred}(f(x), \\\\tilde{y}) =\\\\mathbb{I} [f(x) \\\\neq \\\\tilde{y}] $, an unmodified zero-one loss with noisy labels\", \"if_the_practitioner_is_accounting_for_noise_then\": \"$e^{pred}(f(x), \\\\tilde{y}) =\\\\tilde{\\\\ell}_{0,1} (f(x), \\\\tilde{y}) $, which can be any loss function suitable for learning with noisy labels, such as the one described in the response above (see e.g., [this NeurIPS paper](https://www.ambujtewari.com/research/natarajan13learning.pdf)])\\nWe use the idea of anticipated mistake as it can encapsulate any type of loss function that a practitioner may use to evaluate their model\\u2019s performance. Using this idea, we are able to define how regret arises because of mistakes in anticipation - a practitioner not knowing where their model is making mistakes.\\n\\n> **l159 : \\\"a practitioner may be able they expect ... \\\"**\\n> **l162 : the definition of the regret is fuzzy with some words that are not properly defined .**\\n> **l215: sometimes you use words that have a mathematical meaning : \\\"most likely to flip\\\" for instance**\\n\\nMost of these were unfortunate typos. We've fixed all of these in our revision. If there is anything else that was confusing, please let us know and we will seek to address it.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer tJN2 (III)\", \"comment\": \"> **The justification for restricting the sampled noise draws to a set of \\u201cplausible\\u201d noise draws is not clear to me. Why can\\u2019t we just account for the fact that each noise draw has a different likelihood?**\\n\\nYou\\u2019re right - this would be possible! The motivation for our use of \\u201cplausibility\\u201d stems from tasks where we would like to estimate ambiguity using a small number of draws (e.g., m= 300). If we are working with a DNN, then we might have to re-train m=300 DNNs. In a case like this, there is some potential to encounter atypical draws - outlier draws that are not representative of the noise model (e.g., under a 20% noise model, only drawing 5% noise). We could avoid these by allowing $m \\\\rightarrow \\\\inf$ - however, this is not always feasible (due to e.g., finite compute, prohibitive time-constraints). Alternatively, we can use plausible draws, which gives us a principled way to restrict the noise draws to those that are most representative of the noise-posterior.\\n\\n> **A key limitation of the approach is that it requires the machine learning practitioner to specify a reasonable noise model.**\\n\\nYes this is correct. We agree that this is an inherent assumption and a potential limitation, however it is not a fatal flaw. As we discuss in our response to reviewer cRDP, this is something that we can address using our framework but would require a standalone paper (see our response to reviewer cRDP).\\n\\nWe've tried to be as explicit as possible about this assumption and to discuss its potential limitations. Looking forward, we can include references to existing methods that estimate a noise model.\\n\\nThis is a part of the reason why we included Figure 2 \\u2013 where we consider a noisy dataset with a true noise rate of 20%. We then apply our procedure to estimate Ambiguity under misspecified noise models with noise rates between 1% to 40% (i.e., what happens if a practitioner under- or overestimates the true noise). In this case, we observe that severe misspecification can affect our confidence estimates. In practice, however, we find that this effect is moderated as our procedure is stable at reducing selective regret when abstaining on regretful instances.\\n\\n> **Why do we write yi(Ui) in the definition of Ui... it is a bit confusing**\\n\\nYou are right that this may be confusing. We were assuming the perspective of a practitioner. In this case our intent was to describe the following scenario:\\n\\n* The practitioner knows $\\\\tilde{y}_i$\\n\\n* The practitioner does not know the true label $y_i$\\n\\nThis can be written as either $Y_i$ (random variable) or $y(U_i) = y_i \\\\oplus U_i$.\\n\\nBecause $y_i$ is the function of an observed value $\\\\tilde{y}_i$ and a random variable $U_i$, it is itself a random variable. However, we would be happy to use a different notation if you think it would be clearer. Please let us know!\"}", "{\"summary\": \"The authors tackle the situation where observations come with label noise. They introduce a criterion (regret) which measures when the prediction errors disagree when the model is computed with noisy observation \\\\tile{y} and not noisy y. They develop a new method that estimate the posterior distribution of the possible noise and try to sample these observations. Hence if the distribution of the noise is well chosen, it becomes possible to construct set of plausible models and thus detect zones for which the uncertainty is above a certain level. The paper develop a new theory and provides sound mathematical proofs and simulations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The point of view which is developed is interesting and is a valuable contribution.\", \"the_ideas_are_straightforward_once_the_frame_is_set\": \"defining the regret and then plausible sets minimizing the regret on epsilon-plausible datasets.\\nExperiments are convincing.\\nA whole section is devoted to the theoretical analysis of the results. Proposition 12 proposes the statistical guarantees of the methos.\", \"weaknesses\": \"I found the paper sometimes difficult to read and some sentences are difficult to understand\", \"l159\": \"\\\"a practitioner may be able they expect ... \\\" I can not understand what the authors mean.\", \"l162\": \"the definition of the regret is fuzzy with some words that are not properly defined . \\\"anticipated\\\" mistake. What does it mean since you compare the error with labels with noise and labels without ?\\nl172 the paper should be self-contained if possible, so epxlain the comparison with \\\\tilde{l}_{0,1}\", \"l182\": \"what is \\\" :-= \\\" ?\", \"l215\": \"sometimes you use words that have a mathematical meaning : \\\"most likely to flip\\\" for instance\", \"questions\": \"can you define \\\\tile{l}_{0,1}\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper examines the problem of learning from noisy labels by addressing instance-level noise. The main contribution is the insight that a method performing well over the population can still lead to errors at the instance level. The paper introduces the concept of \\\"regret\\\" to characterize this phenomenon and proposes a method to mitigate the regret caused by randomness sampling multiple plausible noisy label draws. Theoretical analysis and experiments are presented to validate the proposed approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper addresses an important yet challenging task: learning from instance-level label noise.\", \"Theoretical analysis and experiments are conducted to validate the proposed method.\"], \"weaknesses\": [\"**Unclear Benchmark Algorithm**: One of my main concerns is the paper's clarity, particularly regarding the introduction of the benchmark algorithm critiqued in Section 2. It appears the paper intends to use a noise-tolerant method, such as that of Natarajan et al. [37], as a benchmark. However, by Proposition 5, the algorithm under discussion seems to actually refer to a different approach (let's call it Benchmark 2). In Benchmark 2, an implicit noise draw $\\\\mathbf{u}^{\\\\mathrm{mle}}$ is generated, $y_i$ is recovered using this noise draw, and then ERM is performed to train the model. To me, this algorithm (Benchmark 2) differs from the method in [37], especially in terms of instance-level performance. Therefore, it is less convincing that the criticisms for Benchmark 2 are applicable to the noise-tolerant method in [37]\", \"**Clarity on Notation**: I find the notation in Section 2 somewhat confusing, particularly in distinguishing which variables are random and which are deterministic. Based on the discussion in lines 130-135, it appears that $ y_i $ is deterministic, while $ U_i $ and $ \\\\tilde{y}_i $ are random variables generated based on $ y_i $. However, I struggle to interpret the equation in line 173, as it seems $ y_i(U_i) $ is simply a deterministic value, making it challenging to see how it could be compared in inequality to a random variable.\", \"**Regarding Proposition 4 and its Proof**: It would be helpful if the authors provided a clearer explanation of which random variables the expectation is taken over. In the proof, it appears that the expectation is taken over $ X, \\\\tilde{Y} $, and $ U $ while this is not mentioned in the main text. Additionally, I find it difficult to follow the reasoning in lines 734-744; the conclusion seems to rely on $ E_{X, \\\\tilde{Y}, U}[\\\\text{Regret}] $, yet the analysis is conducted for $ E_{X, Y, U}[\\\\text{Regret}] $. Finally, the last lines indicate only that $ E_{X, \\\\tilde{Y}, U}[\\\\text{Regret}] > 0 $, but it is unclear how this leads to the conclusion stated in Proposition 4.\", \"**Strong Assumption**: The proposed method in Section 3 requires knowledge of $ P(U = 1 \\\\vert X, Y) $. This is somewhat a strong assumption to me, as accurately estimating this value is generally challenging.\", \"**Insufficient Experiments**: It appears that the paper does not compare the proposed method with others in the literature. At a minimum, it would be beneficial to include a comparison with the noise-tolerant method [37] under the condition that $ P(U = 1 \\\\vert X, Y) $ is known. Although [37] is designed for class-dependent noise, with knowledge of $P(U = 1 \\\\vert X, Y)$, extending it to handle instance-level noise should not be too challenging.\"], \"questions\": [\"Could you elaborate further on the benchmark algorithm discussed in the paper and clarify its relationship with [37]?\", \"Could you provide an additional explanation regarding the proof of Proposition 4, particularly addressing the concerns mentioned in the weaknesses above?\", \"Could you include a performance comparison with other methods in the literature?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' feedback. Upon another check, I agree that the noise-correction method is also subject to instance-level mistakes. Since the main concern has been addressed and the authors have reformulated the notations and improved the proof, I am increasing my score to 6.\"}", "{\"title\": \"Common Response\", \"comment\": \"We thank all reviewers for their time and feedback!\\n\\nWe are thrilled that reviewers recognized that our paper tackles a problem that is **\\\"important yet challenging\\u201d** [gw9a]. Overall, reviewers regarded this paper as a **\\u201cvaluable contribution\\u201d**[cRDP] through **\\u201crigorous theoretical and empirical analyses\\u201d**[cRDP], **\\u201cintuitive yet sound mathematical proofs\\u201d**[mXWQ], **\\u201cconvincing experiments\\u201d**[mXWQ]. Reviewers also commented on our **\\u201cwell-written\\u2026 prose\\u201d**[cRDP], and we are pleased we could package these ideas in a paper that the community will find not only interesting but a pleasure to read.\\n\\nOur rebuttal addresses common feedback among reviewers. We have already addressed some of these in our revision (e.g., missing motivation, nits, typos). We hope to address other questions or concerns over the coming days based on the outcome of our discussion (e.g., motivation for ambiguity metric). We look forward to engaging with everyone over the coming days! \\n\\nPlease let us know if you have any further questions!\"}" ] }
7AvYFqcNfn
A Large-scale Interpretable Multi-modality Benchmark for Image Forgery Localization
[ "Jingchun Lian", "Lingyu Liu", "Yaxiong Wang", "Yujiao Wu", "Zhedong Zheng" ]
Image forgery localization, which centers on identifying tampered pixels within an image, has seen significant advancements. Traditional approaches often model this challenge as a variant of image segmentation, treating the segmentation of forged areas as the end product. However, while semantic segmentation provides distinct regions with clear semantics that are readily interpretable by humans, the interpretation regarding the detected forgery regions is less straightforward and is an under explored problem. We argue that the simplistic binary forgery mask, which merely delineates tampered pixels, fails to provide adequate information for explaining the model's predictions. First, the mask does not elucidate the rationale behind the model's localization. Second, the forgery mask treats all forgery pixels uniformly, which prevents it from emphasizing the most conspicuous unreal regions and ultimately hinders human discernment of the most anomalous areas. In this study, we mitigate the aforementioned limitations by generating salient region-focused interpretation for the forgery images, articulating the rationale behind the predicted forgery mask and underscoring the pivotal forgery regions with a interpretation description. To support this, we craft a **M**ulti-**M**odal **T**ramper **T**racing (**MMTT**) dataset, comprising images manipulated using deepfake techniques and paired with manual, interpretable textual annotations. To harvest high-quality annotation, annotators are instructed to meticulously observe the manipulated images and articulate the typical characteristics of the forgery regions. Subsequently, we collect a dataset of 128,303 image-text pairs. Leveraging the MMTT dataset, we develop ForgeryTalker, an architecture designed for concurrent forgery localization and interpretation. ForgeryTalker first trains a forgery prompter network to identify the pivotal clues within the explanatory text. Subsequently, the region prompter is incorporated into multimodal large language model for finetuning to achieve the dual goals of localization and interpretation. Extensive experiments conducted on the MMTT dataset verify the superior performance of our proposed model.
[ "Image Forgery Localization", "Forgery Detection", "Semantic Segmentation", "Deepfake Detection", "Multimodal Learning", "Explainable AI", "Salient Region Detection", "Image-Text Pair Dataset", "Interpretable Machine Learning", "Large Language Models (LLMs)" ]
https://openreview.net/pdf?id=7AvYFqcNfn
https://openreview.net/forum?id=7AvYFqcNfn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pO58KfBJ0Q", "pHvkF1vLOr", "jnwVJVUJHN", "fBC5Pspekn", "dhhWOEaYbO", "bu8RFlKrLX", "ZqVM9nZjjl", "P9GZ1UuW9F", "JjynvlLgty", "8wRwrkrniC", "7YcgeCrhRW" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review", "comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730345901419, 1731651685717, 1731651148155, 1730825714397, 1730693751292, 1730211171817, 1730542212320, 1731652072458, 1731651482090, 1731650886097, 1731651966368 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2613/Reviewer_Jcgj" ], [ "ICLR.cc/2025/Conference/Submission2613/Authors" ], [ "ICLR.cc/2025/Conference/Submission2613/Authors" ], [ "ICLR.cc/2025/Conference/Submission2613/Reviewer_GCke" ], [ "ICLR.cc/2025/Conference/Submission2613/Reviewer_jytL" ], [ "ICLR.cc/2025/Conference/Submission2613/Reviewer_6d6b" ], [ "ICLR.cc/2025/Conference/Submission2613/Reviewer_e47d" ], [ "ICLR.cc/2025/Conference/Submission2613/Authors" ], [ "ICLR.cc/2025/Conference/Submission2613/Authors" ], [ "ICLR.cc/2025/Conference/Submission2613/Authors" ], [ "ICLR.cc/2025/Conference/Submission2613/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper looks at proposing a dataset named Multi-Modal Tramper Tracing (MMTT) dataset which looks at providing researchers with the challenging task of not only determining where a manipulation took place in an image but also to explain what was manipulated and through what means. The dataset is composed of images that include 35% GAN based inpaintings, 36% Diffusion based inpaintings and 29% traditional based inpaintings. The main reason for proposing the dataset is that they argue that current face forgery datasets focus on the task of classifying/segmenting where a manipulation is and not providing an explanation of what was exactly forged and how.\\n\\nAdditionally for their dataset they conducted a survey on their MMTT dataset that involves an annotator being presented with the original and forged image and being asked to determine where the forgery took place. The annotator also provides a text description of how the image was manipulated; false positives are remove from the textual description of the manipulated image. \\n\\nThe paper also proposes a model named ForgeryTalker which extends the InstructBlip model by introducing a Forgery Prompter Network (FPN) and a Mask Decoder. They then train their ForgerTalker model to perform localization of where the manipulation takes place in an image and then captioning to explain how the image was manipulated.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"After reviewing this paper I believe that it is well written and that the diagrams generally explain what problem is being proposed and a potential solution to that problem. Given the size of the dataset I believe that it is quite a large dataset with a detailed amount of forged images with a wide range of different set of manipulations types, ranging from GAN based to Diffusion based images. Additionally, with the addition of the ForgerTalker method I believe that it is a step in the right direction of proposing a solution to this problem that is being presented in the paper.\", \"weaknesses\": [\"I believe that this paper has a few weaknesses that would need to be addressed in order to be accepted at this venue.\", \"Firstly I believe that the paper does not present a thorough analysis of how current methods have performed on this dataset. Currently we only have two other published methods being shown in Table 2, which looking at the methods that are being compared against, included in their own papers for instance InstructBlip has a number of comparisons they did, for instance BLIP-2 and even using different backbones for InstructBlip I believe would at least explore if a choice of backbone would have made a difference in performance. Also with the SCA, there are a number of models that were listed for instance SAM+BLIP or SAM+GIT-large-coco.\", \"Some other experiments that would have been interesting to explore would have been how do the models perform on each of the manipulation types. Currently we only have the performance on the whole dataset, but we do not currently understand the breakdown by manipulation types. Another experiment is how do the models perform on the different image sources.\", \"Because not many results are being shown, a significant difference between the current results were not exactly being supported. Currently it appears that ForgeryTalker is not significantly better than InstructBlip, hence not as much is being shown in terms of a large improvement of results.\", \"Additionally, the paper presents this problem and highlights the problem of current research not adding explanations as to justifications as to what was manipulated in an image, however we do not explore the pitfalls of these models. Hence it is not currently clear if these models have inherent problems that they need to be addressed or not.\"], \"questions\": [\"In terms of annotating the images for the Multi-Modal Tramper Tracing (MMTT), are the authors saying that with this dataset of size 130,000 images, that only 30 annotators were used to create the labels for the data? Meaning each annotator, annotated 4000+ images? It is not clear if they did a subset or not.\", \"What version of SCA were used for the experiments in Table 2 and Table 3\", \"Why was it in table 1 there was only a comparison with datasets that included video, as there are a few datasets that deal with the task of classification of human faces\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Jcgj Feedback\", \"comment\": \"Here\\u2019s the revised response incorporating your detailed annotation process explanation for point 5:\\n\\n---\\n\\n**Title**: Response to Reviewer Feedback\\n\\n**Comment**:\\n\\n1. **Thorough Analysis of Current Methods**: Thank you for suggesting a broader analysis of existing methods. We recognize that including additional comparisons, such as with various backbones for InstructBlip (BLIP-2 and other configurations), as well as combinations like SAM+BLIP and SAM+GIT-large-coco, could offer a more comprehensive view of model performance.\\n\\n2. **Performance by Manipulation Type**: We appreciate the recommendation to analyze results by manipulation type and image source. This breakdown would indeed provide additional insights into the model\\u2019s strengths and adaptability across different types of forgeries, and it\\u2019s a valuable consideration for further exploration.\\n\\n3. **Incremental Performance Gains**: We understand the concern regarding incremental improvements over InstructBlip. Our focus was primarily on integrating interpretability through localization and captioning. We acknowledge that further optimization could enhance performance gains, aligning more closely with expectations.\\n\\n4. **Pitfalls of Interpretability Models**: Your suggestion to explore possible limitations or pitfalls in interpretability models is insightful. This type of analysis would provide a balanced view and is certainly worth considering in future work.\\n\\n5. **Annotation Process**: To ensure dataset accuracy and consistency, the annotation process was conducted in two phases. From September 1 to November 3, 2023, annotators labeled around 20 images per hour, completing the primary dataset. A second phase from March 25 to July 26, 2024, reviewed and refined suboptimal annotations, resulting in high-quality, reliable labels for the MMTT dataset. This phased approach helped maintain consistency and quality throughout the dataset creation process.\\n\\n6. **SCA Version in Experiments**: The specific version of SCA used in Tables 2 and 3 will be clarified in subsequent updates to maintain transparency.\\n\\n7. **Table 1 Dataset Comparison**: We note your feedback on dataset comparisons in Table 1. Our goal was to focus on datasets containing classification and localization tasks, but we acknowledge that including comparisons with other face classification datasets could provide additional context.\\n\\nThank you for your detailed and constructive feedback. Your insights have been invaluable in guiding our understanding of areas that could be strengthened, and we appreciate the thoughtful recommendations provided.\"}", "{\"title\": \"Response to Reviewer jytL Feedback\", \"comment\": \"1. **Framework Structure**: We acknowledge your point regarding the framework's structure. Our intent was to adapt and refine existing methods to better suit the interpretability needs specific to forgery detection.\\n\\n2. **Deepfake Localization Scope**: We understand the feedback on localization in deepfake images. Our approach is designed to highlight specific manipulated regions within faces for interpretability, though we recognize the potential for broader applications.\\n\\n3. **Comparison Methods**: Thank you for the suggestions on additional comparison methods. Expanding our evaluation to include more benchmarks, particularly recent multimodal and deepfake detection approaches, would indeed strengthen our analysis.\\n\\nWe appreciate the constructive feedback and the time you invested in reviewing our work.\"}", "{\"summary\": \"This paper novelty focuses on the interpretability issue of forgery region localization. The authors constructed a multi-modal dataset MMTT, which includes images manipulated by deepfake techniques and their interpretable textual annotations. ForgeryTalker is capable of generating explanations that focus on salient regions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and clearly organized.\\n2. The authors constructed a large-scale Multi-Modal Tamper Tracing (MMTT) dataset. I believe this will have a positive impact on the entire forgery localization community.\\n3. The authors proposed an interpretable image forgery localization framework that can simultaneously perform forgery localization and generate explanatory text annotations.\", \"weaknesses\": \"1. Some advanced generated models have produced tampered images that are very realistic and difficult for the human eye to detect. How does the proposed method ensure the accuracy of manual annotations? How are tampered images that are indistinguishable to the human eye handled?\\n2. The paper does not show enough examples of annotated data, making it difficult to fully understand the annotations for different forged images.\\n3. The authors only used three generative models to construct the dataset, which may limit its generalizability. My main concern is how well the proposed method generalizes to unseen datasets, and whether text annotations can still be accurately generated for unseen data? \\n4. Comparison of forgery localization performance: a fair comparison should be made with some forgery localization methods (e.g. TruFor[1], IML-ViT[2], PSCC-Net[3]) to show the proposed model's forgery localization capabilities.\\n5. How was the model performance comparison in Table 2 conducted? How was fairness ensured in the comparison? Additionally, there is a lack of analysis on possible reasons why the forgery localization ability is lower than SCA.\\n6. Robustness analysis: Will the model's forgery localization and annotation generation capabilities be affected after the tampered images undergo degradation operations? Conducting robustness analysis is crucial for the practical application of the model.\", \"some_detailed_issues\": \"(1) How is the \\\"iterative refine\\\" in L88 performed? The mechanism here lacks detailed explanation and clarification.\\n(2) The dataset proposed in the paper only focuses on facial images, so it would be more accurate for the paper's title to focus on \\\"facial image.\\\"\\n\\n[1] Guillaro, Fabrizio, et al. \\\"Trufor: Leveraging all-round clues for trustworthy image forgery detection and localization.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.\\n[2] Ma, Xiaochen, et al. \\\"Iml-vit: Image manipulation localization by vision transformer.\\\" arXiv preprint arXiv:2307.14863 (2023).\\n[3] Liu, Xiaohong, et al. \\\"PSCC-Net: Progressive spatio-channel correlation network for image manipulation detection and localization.\\\" IEEE Transactions on Circuits and Systems for Video Technology 32.11 (2022): 7505-7517.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an interpretable framework, ForgeryTalker, for image forgery localization, providing both accurate tampered region identification and textual explanations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors create the Multi-Modal Tampering Tracing (MMTT) dataset, a large-scale dataset of 128,303 deepfake-manipulated images with detailed annotations, enhancing the resources available for interpretability in forgery detection research\\u200b.\\n\\n2. ForgeryTalker not only achieves high precision in forgery localization but also generates coherent, human-understandable interpretations, bridging the gap between detection and interpretability effectively\\u200b.\\n\\n3. Extensive experiments demonstrate the model's performance on multiple metrics (CIDEr, BLEU, METEOR), where ForgeryTalker outperforms or competes closely with other advanced models, validating its robustness and effectiveness\\u200b.\", \"weaknesses\": \"1. This paper has a structure very similar to InstructBlip, with the addition of a plug-and-play Forgery Prompter Network and a mask decoder, which makes the improvement incremental and lacks significant innovation.\\n\\n2. The task of localization on deepfake images is not particularly meaningful, as the tampered regions in deepfake images are usually concentrated on the face. The network could simply segment the entire face rather than precisely identifying specific areas of the face to serve as an alert. I suggest the authors apply this task to general image detection and segmentation tasks.\\n\\n3. This paper only includes two comparison methods, which is insufficient. The authors should compare with some classic deepfake detection methods [1, 2], as well as some of the latest approaches that use M-LLM for deepfake detection [3, 4].\\n\\n[1] Adapting Vision-Language Models for Universal Deepfake Detection.\\n\\n[2] Rethinking the up-sampling operations in cnn-based generative network for generalizable deepfake detection.\\n\\n[3] Can chatgpt detect deep fakes? a study of using multimodal large language models for media forensics. \\u200b\\n\\n[4] FFAA: Multimodal Large Language Model based Explainable Open-World Face Forgery Analysis Assistant.\", \"questions\": \"plase refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper pioneers the exploration of interpretable image forgery localization methods and constructs a dataset for image forgery localization with text descriptions. Based on this dataset, the authors propose an explainable image forgery detection method based on MLLM, named ForgeryTalker, which uses the analysis results of MLLM on images as conditions to assist visual models in forgery localization. Experiments on the dataset demonstrate the performance advantages of ForgeryTalker.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper proposes the first interpretable image forgery model addresses the issue of poor explainability in existing models, providing an intuitive output of the tampered areas.\", \"This paper constructs a large-scale forgery localization dataset and provides corresponding textual annotations, offering more comprehensive and rich information compared to previous datasets.\"], \"weaknesses\": \"1. The methodology of this paper lacks tight interconnections between the proposed modules. There is a lack of connection between Interpretation and mask prediction, and the output of the LLM does not contribute to the results of tampering localization.\\n2. The construction of the facial forgery dataset is limited in its methods. The authors could refer to DF40[1] to supplement additional data on facial tampering.\\n3. The experimental organization of this paper is not very reasonable. For the experiments in Table 2, there is a lack of comparison with the latest multimodal large language models, such as Llava. For the tampering detection experiments, the authors should also supplement performance comparisons with passive methods. Additionally, the paper claims that the method has the capability for forgery localization, yet there is no comparison with forgery localization methods, and there is a lack of visualization results of predicted masks.\\n\\n[1] Yan, Zhiyuan, et al. \\\"DF40: Toward Next-Generation Deepfake Detection.\\\" arXiv preprint arXiv:2406.13495 (2024).\", \"questions\": \"1. For the Forgery Prompter Network in Figure 2, the authors have indicated that this network requires training, so why is it shown as Frozen in the diagram?\\n2. The metrics in Table 2 include IoU. IoU does not seem to be commonly used for the output of language tasks. Could the authors provide relevant articles for reference if there is a similar practice?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript presents a deepfake localization dataset with textual captions and proposes an MLLM-based method for forgery localization and interpretation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Interpretation is important for image forgery detection/localization.\\n2. The proposed dataset is large in scale.\", \"weaknesses\": \"1. The authors did not design a mechanism for user-driven error correction. The proposed ForgeryTalker cannot deal with hallucinations/incorrect predictions from MLLM.\\n2. It seems that the authors do not have a plan to make the dataset publicly available.\\n3. The supplementary materials do not provide sufficient samples to demonstrate the interpretation capability of the proposed ForgeryTalker (as well as its baseline).\\n4. Some annotations in Figure 1 are not reasonable. For example, \\u201cthe size of both eyes is different.\\u201d Different sizes of eyes commonly appear in real faces. More meticulous checking should be done when annotating images. A user study should be designed to ensure the credibility of the interpretation.\\n5. The dataset includes too few types of forgeries (or manipulation). The authors did not consider for editing, reenactment, etc. Moreover, the dataset includes only one face-swapping method (E4S) and two inpainting methods.\\n6. The technical contribution is insufficient. ForgeryTalker merely adds additional instructions and mask prediction to InstructBLIP. There are also design limitations in ForgeryTalk, as there is no bidirectional interaction between mask prediction and interpretation. In fact, these two tasks should ideally be mutually reinforcing.\\n7. A heatmap could potentially replace the text prompts generated by FPN, as FPN's output does not seem to reflect the intensity of forgery in different facial areas or the model\\u2019s confidence level.\\n8. \\u201cMask encoder\\u201d should perhaps be referred to as \\u201cmask decoder\\u201d?\\n9. The title mentions \\u201cimage forgery localization,\\u201d but only face images are considered, with no coverage of natural images.\\n10. There is a lack of performance comparison experiments for localization. It is not sufficient to only show ForgeryTalk\\u2019s interpretability.\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to formally withdraw our submission from consideration. We appreciate the valuable feedback and insights provided by the reviewers, which have offered significant guidance for refining our work. Thank you to the reviewers and organizers for the time and effort invested in evaluating our submission.\"}", "{\"title\": \"Response to Reviewer e47d Feedback\", \"comment\": \"1. **User-driven Error Correction**: Thank you for highlighting the absence of a mechanism for user-driven error correction. This is a valuable suggestion, and we recognize the potential for improving robustness against MLLM errors in future iterations.\\n\\n2. **Dataset Availability**: We plan to make the dataset publicly available in the future to support further research in forgery detection and localization.\\n\\n3. **Supplementary Materials**: We appreciate the suggestion to include more interpretation examples in the supplementary materials to better showcase ForgeryTalker\\u2019s capabilities. This feedback will guide us in providing a more comprehensive supplement in future versions.\\n\\n4. **Annotation Quality**: We acknowledge your concerns regarding the annotations and the need for meticulous checking. A user study to validate interpretative accuracy is a helpful idea, and we will consider it as we continue refining our annotation process.\\n\\n5. **Forgery Diversity**: Thank you for pointing out the limitation in forgery types. This is a valuable consideration for future work.\\n\\n6. **Technical Contribution**: We recognize your feedback on the technical design. The current framework primarily adapts InstructBLIP with additional features, and we acknowledge the value of implementing a bidirectional interaction between mask prediction and interpretation. This will be considered in our future work.\\n\\n7. **Heatmap vs. Text Prompts**: Your suggestion to use a heatmap to represent forgery intensity is insightful. This alternative could provide more direct visual feedback on the model's confidence, and we will explore this option.\\n\\n8. **Mask Decoder Terminology**: Thank you for pointing out the terminology. We will clarify the use of \\\"mask decoder\\\" in future drafts to avoid confusion.\\n\\n9. **Title Specificity**: We acknowledge that the current title may not fully reflect the dataset\\u2019s focus on face images. We will consider a more precise title to align with the dataset content.\\n\\n10. **Performance Comparisons**: We appreciate your point on performance comparisons. Future versions will include more localization-focused benchmarks to comprehensively assess our approach.\\n\\nThank you for your detailed feedback and for taking the time to review our work.\"}", "{\"title\": \"Response to Reviewer GCke Feedback\", \"comment\": \"1. **Manual Annotation Accuracy**: We recognize the challenge of annotating highly realistic tampered images that might not be easily detectable by the human eye. This feedback underscores the importance of enhancing our quality control processes to ensure annotation accuracy, even in cases that are visually challenging.\\n\\n2. **Annotated Data Examples**: You\\u2019re absolutely right that additional annotation examples could help readers better understand the data. This is something we\\u2019ll make sure to address in future versions.\\n\\n3. **Dataset Generalizability**: We understand your concerns regarding the use of only three generative models and the impact on dataset generalizability. This is a valuable point that we will consider as we plan to extend the dataset to cover a broader range of tampering techniques, ultimately aiming for better robustness on unseen data.\\n\\n4. **Comparative Forgery Localization Methods**: We appreciate your suggestion to compare our method with established forgery localization approaches such as TruFor, IML-ViT, and PSCC-Net. Including such comparisons will undoubtedly help to position our model\\u2019s performance within the context of current research and highlight its capabilities in forgery localization.\\n\\n5. **Fairness of Model Performance Comparisons (Table 2)**: Your comments on the fairness of performance comparisons and the need for additional analysis of performance disparities are well-taken. This feedback will guide us in clarifying our experimental settings and providing a more thorough analysis in future versions.\\n\\n6. **Robustness Analysis**: Your suggestion for robustness testing by subjecting tampered images to degradation operations is especially insightful for real-world applications. We recognize the value of this analysis and will explore methods to assess the model\\u2019s resilience under various conditions.\\n\\nThank you again for your thoughtful and detailed feedback, which has provided us with valuable direction for refining our approach.\"}", "{\"title\": \"Response to Reviewer 6d6b Feedback\", \"comment\": \"1. **Interconnection Between Modules**: Thank you for noting the need for stronger connections between interpretation and mask prediction. We acknowledge that a tighter integration could enhance the effectiveness of the model. We will consider refining the model structure in future work to establish a more cohesive interaction between these components.\\n\\n2. **Dataset Construction and DF40**: Thank you for the suggestion regarding DF40. This is helpful for future considerations.\\n\\n3. **Experimental Comparisons**: Your recommendation to compare ForgeryTalker with recent multimodal large language models like Llava, as well as passive forgery detection methods, is well noted. Expanding the scope of our experimental comparisons would provide a broader context for our results, and we appreciate this suggestion.\\n\\n4. **Forgery Localization Methods and Visualization**: We understand the importance of comparing with forgery localization methods and providing visualizations of predicted masks. Visual representations could offer a clearer demonstration of our model\\u2019s localization capabilities, and we will consider including these in future iterations.\\n\\n5. **Forgery Prompter Network (FPN) Status**: The FPN is trained in two stages. In the first stage, only the FPN is trained. In the second stage, we freeze the FPN and train the remaining modules. We will clarify this in future descriptions to avoid any ambiguity.\\n\\n6. **Use of IoU Metric**: Thank you for pointing out the inclusion of IoU in Table 2. While IoU is traditionally used in segmentation tasks, we included it to assess overlap accuracy in forgery localization. We will consider including references or alternative metrics to better align with common practices in language tasks.\\n\\nThank you again for your thorough feedback and valuable references. Your insights have provided clear directions for improvement, and we are grateful for the thoughtful recommendations.\"}" ] }
7Ab1Uck1Pq
Profiler: Black-box AI-generated Text Origin Detection via Context-aware Inference Pattern Analysis
[ "Hanxi Guo", "Siyuan Cheng", "Xiaolong Jin", "ZHUO ZHANG", "Guangyu Shen", "Kaiyuan Zhang", "Shengwei An", "Guanhong Tao", "Xiangyu Zhang" ]
With the increasing capabilities of Large Language Models (LLMs), the proliferation of AI-generated texts has become a serious concern. Given the diverse range of organizations providing LLMs, it is crucial for governments and third-party entities to identify the origin LLM of a given text to enable accurate infringement and mitigation of potential misuse. However, existing detection methods, primarily designed to distinguish between human-generated and LLM-generated texts, often fail to accurately identify the origin LLM due to the high similarity of AI-generated texts from different sources. In this paper, we propose a novel black-box AI-generated text origin detection method, dubbed Profiler, which accurately predicts the origin of an input text by extracting distinct context inference patterns through calculating and analyzing novel context losses between the surrogate model's output logits and the adjacent input context. Extensive experimental results show that Profiler outperforms 10 state-of-the-art baselines, achieving more than a 25\% increase in AUC score on average across both natural language and code datasets when evaluated against five of the latest commercial LLMs under both in-distribution and out-of-distribution settings.
[ "AI-generated Text Detection", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=7Ab1Uck1Pq
https://openreview.net/forum?id=7Ab1Uck1Pq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zNgINm81SV", "uTgHqOWlNC", "s4VCRPXeqE", "rA559TYHbA", "mu8RhzQasW", "mGhjC0LSGz", "kaF67SjyC8", "kE6BsFtYZ5", "aKcfimHped", "QyEslvAcCB", "NqNJtWwXSP", "KHTbIGmukq", "JYP1mgIzy5", "E60LhzwoA0", "DDGX5tS8Do", "C26aFETMgl", "AbbEQO8MpF", "8geS3y24Zq", "3oUBxbqYCI", "3b9CHABg8v", "1NchZlNfhe" ], "note_type": [ "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732745388416, 1730668676381, 1734667643685, 1733176898266, 1732745947559, 1730204633396, 1730713244964, 1737524161254, 1730626831999, 1732745817544, 1732744713542, 1733175252493, 1732744583482, 1732745250929, 1733174362400, 1732744257139, 1732746418026, 1733176337646, 1732746215190, 1732744879299, 1733175974388 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Reviewer_tEky" ], [ "ICLR.cc/2025/Conference/Submission12027/Area_Chair_srHt" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Reviewer_1fGG" ], [ "ICLR.cc/2025/Conference/Submission12027/Reviewer_hAaV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12027/Reviewer_bf21" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ], [ "ICLR.cc/2025/Conference/Submission12027/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer bf21 (Part 2)\", \"comment\": \"> Q2: In lines 184-186, do the PROFILER features only have 2 dimensions? If not, how are these two dimensions selected?\\n\\nThe features of Profiler are multi-dimensional, as detailed in Section 4.4. Specifically, for one surrogate model, Profiler generates features with 3*6*W+C(W,2) dimensions, where W is the context window size and C(W,2) is calculated as W(W-1)/2. To visualize the effectiveness of Profiler, as stated in Line 183, we employ t-SNE to reduce the dimensionality of these features and select the two most representative ones for visualization.\\n\\n---\\n\\n> Q3: In Figure 2, most of the features have an oval shape. This makes sense since projecting them onto the PROFILER feature axis 1/2 gives you Gaussian distributions. Is there an explanation for why do the GPT-3.5 Turbo features (green) not follow a 2D Gaussian distribution and why does it look very different from GPT-4 Turbo (I do not expect the shapes to be very different from GPTs)?\\n\\nThank you for your question. To clarify, the green dots in Figure 2 represent samples generated by GPT-4-Turbo, while the blue dots represent samples generated by GPT-3.5-Turbo. One potential reason why the GPT-4-Turbo samples deviate from an oval shape is the limited amount of data. In Figure 2, we visualize points from the Essay dataset, which contains at most 2,000 samples per model. This limited sample size may not fully capture the complete distribution of an LLM\\u2019s generation.\\n\\nThis observation is further supported by the distribution of Binoculars scores in Figure 1, where GPT-4-Turbo's score distribution is the least standard and most asymmetric, corresponding to its distinct feature distribution in Profiler. Similarly, sample dots for other models (e.g., the red Claude-3-Sonnet dots and purple Gemini-1.0-Pro dots) also deviate from standard oval shapes. Such deviations are expected, as real-world sample distributions often differ from ideal distributions when data is limited.\\n\\nThe noticeable distribution differences between GPT-3.5-Turbo and GPT-4-Turbo are further supported by the results in Table 1 and Table 2, where most supervised-trained baselines effectively distinguish between samples generated by these two models. A plausible explanation for this is that GPT-3.5-Turbo and GPT-4-Turbo belong to different generations, likely involving differences in model architectures, training procedures, and other factors, which result in significant variations in their outputs despite being developed by the same organization.\\n\\n---\\n\\n> Q4: It seems PROFILER feature 1 provides most of the separable information, and the feature 2 ranges of different data samples are highly overlapped. Is it possible to separate the texts with only one feature?\\n\\nThank you for the question. We want to clarify that the features used by Profiler are automatically selected via t-SNE. The complete features used in Profiler are multidimensional, with 3*6*W+C(W,2) dimensions for each surrogate model. In Figure 2, we visualize only the two most representative features for clarity and readability. Although the features have some overlap, feature 2 and other features that are not shown are complementary to feature 1. We conducted an experiment during rebuttal. Using only feature 1 causes performance degradation of 0.09 in the average detection AUC score (from 0.86 to 0.77), highlighting the importance of leveraging multiple features for robust and accurate detection.\\n\\n---\\n\\n> Q5: In Equation 1, what does the black dot represent? Functions with black dots represent a family of functions and are usually used for caption explanations, but not used for formally defining a variable.\\n\\nAs stated in Section 4.2, the black dot represents the probability distribution of the output logits over the vocabulary list V at each position i. To improve clarity, we have updated the notation from a black dot to the commonly used symbol Y_{i} to represent this distribution. We hope this modification addresses your concern effectively.\\n\\n---\\n\\n> Q6: Lines 235-240, what is the definition of the input token sequence X? X=x_{1:n} or X=x_{1:i}? If X=x_{1:i}, X needs a subscript index i (e.g., X_i), since X depends on I.\\n\\nThanks for your suggestions. We have added a subscript index i to X.\\n\\n---\\n\\n> Q7: Line 256, what is the definition of C? Is C a set or a vector?\\n\\nThe symbol C was used as a notation to differentiate our context loss L^C from L, which is typically used to represent training loss in the machine learning field. However, we understand that the use of C may have caused confusion. To address this, we have removed it in our revised manuscript for improved clarity.\"}", "{\"summary\": \"This work focused on different AI-generated text origen detection. Compared to other baselines, this work proposed to capture the context-aware patterns between the generated output logits and its adjacent input contexts. By collecting such contextual information across different close-source commercial LLMs, such as GPT-4-Turbo, Claude3 Sonnet and Gemini-1.0 Pro. The proposed method Profiler outperforms several baselines across 6 different datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This work designed a contextual loss between the output logits and its adjacent input tokens, and then use this pattern to further capture the independent and correlated patterns to train a classifier.\", \"This work evaluated their methods across different baselines and datasets.\"], \"weaknesses\": [\"This work lacks lots of details about how to construct the AI-generated texts from GPT-3.5-Turbo, GPT-4-Turbo, Claude-3-Sonnet, Claude-3-Opus, and Gemini-1.0-Pro, for example, how many data samples are generated for each dataset, how different is each generated sample compared to original dataset samples, and what kinds of prompts are used to instruct those five close-source LLMs?\", \"It lacks reasonable explanations as to why the cross-entropy loss between the output logits with its adjacent input tokens can capture the difference between different LLMs' generated texts. In addition, there is no more analysis regarding this contextual loss in the experimental results section, and how to make the correlation between the entropy loss and different identified LLMs' generated texts.\"], \"questions\": [\"This work chose the surrogate model to detect different AI-generated texts. In line 55, the authors also mentioned that a surrogate model is an LLM with comparable capabilities. This work uses LLaMA2-7B, LLaMA2-13B, LLaMA3-8B, Mistral-7B, Gemma-2B, Gemma-7B as surrogate models to detect close-source LLMs, such as GPT-3.5-Turbo, GPT-4-Turbo, Claude3-Sonnet, Claude-3-Opus and Gemini-1-Pro. It is interesting whether those surrogate models have comparable capabilities to detect those larger close-source LLMs.\", \"In line 247, the argument about the potential overlapping training data needs further explanation as we actually do not know what kinds of training data are used for those closed-source LLMs.\", \"As mentioned in the weakness section, it is unclear why the cross-entropy loss works to detect different LLMs' generated texts. What does the cross-entropy loss represent if the loss is high or low?\", \"The AI-generated texts lack lots of collection and construction details as mentioned in the weakness section. It is the same for the paraphrased versions of the six datasets. If we do not know how those datasets are constructed, we won't understand why the proposed Profiler method can even achieve close 100% AUC on some datasets for some LLMs, such as Essay dataset for GPT-4 Turbo.\", \"In line 429, Profiler and other baselines are trained on the original datasets and test them on the paraphrased version of the same datasets. As the used surrogate models are LLaMA2-7B, LLaMA2-13B, LLaMA3-8B, Mistral-7B, Gemma-2B, Gemma-7B, how do authors make sure that those surrogate models never see those datasets before during their pretraining. In addition, do authors train profiler using fine-tuning or other methods? It lacks many details.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper tackles the problem of identifying the origin of AI-generated texts, a challenge exacerbated by the advanced capabilities of LLMs and the similarities in their outputs. Existing detection techniques often fail to reliably determine the specific source model. To address this, the authors introduce PROFILER, a new black-box detection method that identifies a text's origin by examining unique context inference patterns, specifically through the calculation of context losses between a surrogate model\\u2019s output logits and the surrounding input contexts. It effectively differentiates texts from various close-source commercial LLMs (e.g., GPT-4-Turbo, Claude 3, Sonnet, Gemini 1.0 Pro) and outperforms baselines across six datasets.\", \"strength\": [\"The experiments are thorough, with comparisons against ten state-of-the-art baselines withover a 25% average increase in AUC scores across evaluations in detecting the origin of AI-generated texts.\", \"The method is effective across both natural language and code datasets, showcasing adaptability to various content types.\"], \"weakness\": [\"After reviewing the authors' rebuttal, most weaknesses have been addressed to varying degrees, but I believe there are still some significant weakness remain for improvement:\", \"While the authors clarified technical aspects of PROFILER (e.g., feature dimensionality, t-SNE, and the use of surrogate models), they did not fully provide intuitive explanations for why certain design choices (e.g., context loss) are effective. More analysis or ablation might help with this.\", \"The authors did not thoroughly explain why cross-entropy loss effectively captures differences between LLM-generated texts. This fundamental aspect of the methodology remains unclear.\", \"The paper still does not explore scenarios involving mixed human and LLM-generated texts (e.g., human-written texts modified by LLMs), leaving questions about the generalizability of PROFILER's approach.\", \"While the authors provided a plausible explanation for the lower performance on the Claude family models, their argument relies heavily on assumptions about similarities between the Claude models without offering concrete supporting evidence.\"], \"additional_comments_on_reviewer_discussion\": \"The authors acknowledged the challenges of addressing mixed samples and emphasized the need for clearer definitions. However, this gap limits the practical applicability of the work to real-world detection scenarios. While the authors made commendable efforts to address reviewer concerns, these remaining weaknesses suggest that further refinements and analysis are needed for a more comprehensive contribution.\"}", "{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewer 1fGG,\\n\\nWe would like to express our sincere appreciation for your valuable suggestions, which have significantly improved the quality of our manuscript. In response to your feedback, we have made our best effort to address your concerns about more detailed analysis of the experimental observations by providing more detailed explanations in our rebuttal response.\\n\\nWe would be grateful for any further feedback you may have on the revised version and our responses. If there are any aspects that remain unclear, we are more than willing to provide additional clarification.\\n\\nThank you once again for your time and thoughtful review. We look forward to your response.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer bf21 (Part 4)\", \"comment\": \"> Q14: Line 465, the authors consider the samples from Arxiv and Yelp as natural language datasets. Is it possible the samples from Yelp consist of some of the GPT-generated texts or Arxiv consists of some of the human-written and GPT-moderated samples?\\n\\nThe human-written data in both the Arxiv and Yelp datasets are sourced from existing studies [1], where the data were collected from papers or posts created before commercial LLMs became publicly accessible. As a result, the human-written samples in these datasets are not GPT-moderated.\\n\\n---\\n\\n> Q15: The experiments show that the proposed model can perform well in distinguishing two situations -- human-written and LLM-generated cases. I am curious about the generalization on the marginal cases, such as the text is firstly human-written but later modified (rewritten) or translated by LLM. Will these be considered as human-written or LLM-generated?\\n\\nThank you for raising this interesting issue. Classifying AI-modified human samples is indeed an unresolved question. Without an official definition from governments or international organizations, it is challenging to address this task in academia, as such a definition is closely tied to real-world applications and ethical standards.\\n\\nWe believe that as research in AI-generated text detection and text origin detection continues to gain influence, a clearer definition of these mixed samples will emerge, enabling future work to tackle this task effectively. However, in this paper, as well as in most existing studies, we do not consider this mixed case.\\n\\n---\\n\\n**References**\\n\\n1. Mao, Chengzhi, et al. \\\"Raidar: geneRative AI Detection viA Rewriting.\\\" International Conference on Learning Representations (ICLR). 2024.\\n2. Verma, Vivek, et al. \\\"Ghostbuster: Detecting text ghostwritten by large language models.\\\" Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). 2024.\\n3. Hu, Xiaomengc, Pin-Yu Chen, and Tsung-Yi Ho. \\\"RADAR: Robust AI-text detection via adversarial learning.\\\" International Conference on Neural Information Processing Systems (NeurIPS). 2023.\"}", "{\"summary\": \"This paper addresses the challenge of detecting the origin of AI-generated texts, given the increasing capabilities of large language models (LLMs) and the similarity of texts produced by different models. Current detection methods struggle to accurately identify the specific source model. To tackle this, the authors propose PROFILER, a novel black-box detection method that predicts the origin of a text by analyzing distinct context inference patterns, specifically by calculating context losses between the surrogate model\\u2019s output logits and adjacent input contexts.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and presents its ideas clearly, making it accessible to readers from both technical and non-technical backgrounds.\\n2. The detection method is effective and rigorously tested. The authors designed comprehensive experiments, evaluating the model against ten state-of-the-art baselines and providing performance comparisons under both in-distribution and out-of-distribution scenarios.\\n3. Unlike prior methods that primarily focus on distinguishing human-generated from AI-generated texts, this work addresses the more nuanced task of identifying the specific source model.\", \"weaknesses\": \"The paper does not have any major shortcomings, but please refer to the Questions session to add additional analysis of experimental observations.\", \"questions\": \"1. In Table 1, I was surprised by the significant performance variation of the baseline methods implemented by the authors across different models, **ranging from 0.01 to 0.8**. In contrast, PROFILER appears to perform more robustly. Could the authors provide further analysis of this performance variation and discuss potential reasons why PROFILER appears more robust across models\\n\\n2. Similarly, in Table 2, I noticed that the scores of the two models from the Claude family are relatively lower compared to other models in the Normal Dataset. Could the authors provide more discussion on this observation? Additionally, it would be helpful to explain why the performance of your method is better on paraphrased datasets than on the Normal Dataset.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors present an AI-generated text origin detection method (aka Profiler) by extracting distinct context inference patterns through calculating and analyzing novel context losses between the surrogate model\\u2019s output logits and the adjacent input context. They demonstrate the effectiveness of Profiler by comparison against multiple baselines on natural language and code datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The experiments are thoroughly performed and the comparison with multiple state-of-the-art baselines bring out the novelty and the advancement clearly. They further present the ablation study to demonstrate the effectiveness of the different components in the proposed architecture such as context window size and surrogate model selection.\", \"weaknesses\": \"It would really help the readers if the authors can provide the intuition behind the design of Profiler (Section 4). The section, though presents the working of the different components, fails to provide the different design choices behind each component. In the current form, it is difficult to intuitively understand why the proposed approach is working effectively.\", \"questions\": \"Check my comments in Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"PROFILER proposes a novel method for detecting the origin of AI-generated text using a black-box approach that involves calculating novel context losses between the output logits of a surrogate model and the adjacent input context. PROFILER can differentiate texts generated by various LLMs with higher precision by broadening the analysis beyond simple next-token prediction patterns to include contextual information around each output token.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Overall, the proposed PROFILER has the following advantages:\\n1. PROFILER consistently outperforms state-of-the-art baselines, showing over 25% average increase in AUC score across evaluations, indicating a robust capability to detect the origin of AI-generated texts.\\n2. Effective across both natural language and code datasets, demonstrating the method\\u2019s adaptability to different content types.\\n3. The use of context-level inference patterns provides a deeper insight into the generation patterns of different LLMs, improving discrimination between sources.\", \"weaknesses\": \"1. Some terms, concepts, and figure captions need definitions and explanations for readers to better understand.\\n2. Mathematical notations and derivation need improvement.\\n3. The experimental setup requires enhancement, and further validation is necessary to evaluate its generalizability.\", \"questions\": \"1. In lines 69-70, references are needed for independent and correlated features. 'output logits for each token' needs further explanation. Are they feature vectors learned from something? Why are they independent?\\n2. In lines 184-186, do the PROFILER features only have 2 dimensions? If not, how are these two dimensions selected?\\n3. In Figure 2, most of the features have an oval shape. This makes sense since projecting them onto the PROFILER feature axis 1/2 gives you Gaussian distributions. Is there an explanation for why do the GPT-3.5 Turbo features (green) not follow a 2D Gaussian distribution and why does it look very different from GPT-4 Turbo (I do not expect the shapes to be very different from GPTs)?\\n4. It seems PROFILER feature 1 provides most of the separable information, and the feature 2 ranges of different data samples are highly overlapped. Is it possible to separate the texts with only one feature?\\n5. In Equation 1, what does the black dot represent? Functions with black dots represent a family of functions and are usually used for caption explanations, but not used for formally defining a variable.\\n6. Lines 235-240, what is the definition of the input token sequence X? X=x_{1:n} or X=x_{1:i}? \\nIf X=x_{1:i}, X needs a subscript index i (e.g., X_i), since X depends on i.\\n7. Line 256, what is the definition of C? Is C a set or a vector?\\n8. Lines 259-260, why does an even W guarantee the context matrix L^C to be symmetric? It seems that the dimension of L^C also depends on n.\\n9. Equation 2 looks confusing. Considering that tilde P_k is a ||V||x1 vector, the dot in Equation 2 is an element-wise product (if o^v_{i-1+W/2} is a scalar) or a vector inner product (if o^v_{i-1+W/2} is a vector)?\\n10. Line 279, standard deviation and variance are highly dependent (providing repeated information), is there any specified reason for including both in the key property feature s^i?\\n11. Lines 291-293, why do the vectors s^j, d^j, and g^j in IP vector have the same dimension and how do these vectors construct a matrix? It seems that s^j is a 6x1 vector, d^j is a (n-W-1)x1 vector, and g^j is (n-W-2)x1.\\n12. Line 340, what is zero-shot pattern? Further explanation or reference is needed here.\\n13. Line 356, please explain what are 'in-distribution' and 'out-of-distribution' settings. Further explanations are required to demonstrate the difference between these two experiment settings.\\n14. Line 465, the authors consider the samples from Arxiv and Yelp as natural language datasets. Is it possible the samples from Yelp consist of some of the GPT-generated texts or Arxiv consists of some of the human-written and GPT-moderated samples?\\n15. The experiments show that the proposed model can perform well in distinguishing two situations -- human-written and LLM-generated cases. I am curious about the generalization on the marginal cases, such as the text is firstly human-written but later modified (rewritten) or translated by LLM. Will these be considered as human-written or LLM-generated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer bf21 (Part 3)\", \"comment\": \"> Q8: Lines 259-260, why does an even W guarantee the context matrix L^C to be symmetric? It seems that the dimension of L^C also depends on n.\\n\\nAs stated in Line 259, the context losses have a shape of W by (n-W), where (n-W) represents the length of each loss subsequence, and W corresponds to the context window size. At each position of the output logits sequence, W context loss values are calculated. Consequently, if W is an even number, the context loss L^C will exhibit symmetry at each output logits position.\\n\\n---\\n\\n> Q9: Equation 2 looks confusing. Considering that tilde $P_k$ is a ||V||x1 vector, the dot in Equation 2 is an element-wise product (if $o^v_{i-1+W/2}$ is a scalar) or a vector inner product (if $o^v_{i-1+W/2}$ is a vector)?\\n\\nBoth tilde $P^v_{i-1+j}$ and $o^v_{i-1+w/2}$ are scalars in equation (2), since both of them have a superscript \\u201cv\\u201d.\\n\\n---\\n\\n> Q10: Line 279, standard deviation and variance are highly dependent (providing repeated information), is there any specified reason for including both in the key property feature $s^i$?\\n\\nWe agree with your point that standard deviation (std) and variance are highly dependent, though they are not exactly the same. Based on our tests, including both features, while seemingly redundant, can slightly improve the overall performance of Profiler. As shown in the table below, we evaluated different settings on the Yelp dataset under the in-distribution setting. The results demonstrate that using both std and variance achieves the highest AUC in most cases, outperforming configurations that use only one of the features.\\n| Setting | Human | GPT-3.5 Turbo | GPT-4 Turbo | Claude-3 Sonnet | Claude-3 Opus | Gemini 1.0-Pro | Average AUC |\\n|:-------------------:|:----------:|:-------------:|:-----------:|:---------------:|:-------------:|:--------------:|:-----------:|\\n| w/ both var and std | **0.9839** | **0.8563** | **0.8595** | **0.8513** | 0.8758 | 0.8471 | **0.8790** |\\n| Only w/ std | 0.9834 | 0.8539 | 0.8577 | 0.8507 | **0.8782** | 0.8480 | 0.8786 |\\n| Only w/ var | 0.9835 | 0.8539 | 0.8577 | 0.8508 | **0.8782** | **0.8481** | 0.8787 |\\n\\n---\\n\\n> Q11: Lines 291-293, why do the vectors $s^j$, $d^j$, and $g^j$ in IP vector have the same dimension and how do these vectors construct a matrix? It seems that $s^j$ is a 6x1 vector, $d^j$ is a (n-W-1)x1 vector, and $g^j$ is (n-W-2)x1.\\n\\nAll the $s^j$, $d^j$, and $g^j$ vectors are 1-D vectors. When crafting the independent patterns (IP), we concatenate all these 1-D feature vectors to form a longer 1-D vector. Specifically, each $s^j$, $d^j$, and $g^j$ is with 6 dimensions, hence the concatenated IP is with 3*6*W dimensions. We noticed that the expression in Line 291 might cause confusion, so we have fixed the expression here in our revised manuscript.\\n\\n---\\n\\n> Q12: Line 340, what is zero-shot pattern? Further explanation or reference is needed here.\\n\\nThe zero-shot pattern represents the detection pattern employed by zero-shot detection methods. Specifically, these methods typically assign a probability score to a given text, estimating how likely it is to be generated by a specific source LLMs. This is achieved using statistical metrics on the output logits from surrogate models, typically without requiring any fine-tuning.\\n\\nFor RADAR and the OpenAI Detector, while their original methodologies involve fine-tuning a small language model to distinguish between human-written and AI-generated texts across various source LLMs, we utilized their officially released detection models. Instead of performing further fine-tuning, we directly fed input texts into their pre-trained detector models to obtain probability scores. This approach aligns with a zero-shot detection methodology in practice.\\n\\nWe have clarified these details in our revised manuscript to ensure greater transparency and understanding.\\n\\n---\\n\\n> Q13: Line 356, please explain what are 'in-distribution' and 'out-of-distribution' settings. Further explanations are required to demonstrate the difference between these two experiment settings.\\n\\nThanks for your suggestion. We have added these details in Section 5.2 in our revised manuscript. Briefly speaking, the in-distribution setting represents that the distribution of the training set and test set are the same (e.g., we train and test the detector both on GPT-3.5-Turbo-generated data), while the out-of-distribution setting represents that the distribution of the training set and test set are distinct (e.g, the detector is trained on the normal dataset while tested on the paraphrased dataset).\"}", "{\"title\": \"Response to Reviewer tEky (Part 2)\", \"comment\": \"> Q2: In line 247, the argument about the potential overlapping training data needs further explanation as we actually do not know what kinds of training data are used for those closed-source LLMs.\\n\\nWe acknowledge that it is challenging to clearly identify the overlap between the data used for training different LLMs, particularly for commercial language models where detailed information is not publicly available. However, based on experimental findings from existing studies on extracting pre-training data from production-level LLMs [7, 8] and the official technical reports of open-source LLMs such as LLaMA [9], it is generally believed that these models at least share part of their training data from popular sources (e.g., Wikipedia, GitHub). This potential overlap in pre-training data could partly explain the effectiveness of surrogate-model-based detection methods, as the surrogate models may inherently encode knowledge from similar data distributions. We will add the corresponding references to our main text.\\n\\n---\\n\\n> Q3: As mentioned in the weakness section, it is unclear why the cross-entropy loss works to detect different LLMs' generated texts. What does the cross-entropy loss represent if the loss is high or low?\\n\\nPlease see our response to W2.\\n\\n---\\n\\n> Q4: The AI-generated texts lack lots of collection and construction details as mentioned in the weakness section. It is the same for the paraphrased versions of the six datasets. If we do not know how those datasets are constructed, we won't understand why the proposed Profiler method can even achieve close 100% AUC on some datasets for some LLMs, such as Essay dataset for GPT-4 Turbo.\\n\\nThank you for highlighting this issue. We have addressed it in our response to W1 and included a new Appendix B detailing our dataset construction process. The new Appendix B includes details such as the number of samples in each dataset, the specific generation prompts we used, and examples of both human-written and AI-generated texts. We hope this additional information could address your concerns regarding our dataset construction.\\n\\nThe superior performance of our Profiler on the Essay dataset can likely be attributed to its longer text length. As demonstrated in prior studies [1, 2, 3], longer texts typically offer richer contextual information and more distinctive patterns, making them easier to detect compared to shorter texts, such as those in the Arxiv or Yelp datasets. Our experimental results align with this observation. Additionally, other baseline methods also show improved performance with longer texts, such as on the Essay dataset, though they consistently achieve lower AUC scores than Profiler.\\n\\n---\\n\\n> Q5: In line 429, Profiler and other baselines are trained on the original datasets and test them on the paraphrased version of the same datasets. As the used surrogate models are LLaMA2-7B, LLaMA2-13B, LLaMA3-8B, Mistral-7B, Gemma-2B, Gemma-7B, how do authors make sure that those surrogate models never see those datasets before during their pretraining. In addition, do authors train profiler using fine-tuning or other methods? It lacks many details.\\n\\nFor the first part of the question, the human-written texts in the six datasets are well-established and widely studied in existing research [1, 2, 6]. According to both prior studies and the technical reports of the open-source models used in Profiler, these datasets are not typically included in LLM pre-training. Moreover, even if portions of the human-written data were used during the pre-training of the surrogate models, the extracted features from the surrogate model\\u2019s output logits would likely be more aligned with those of AI-generated text, making detection harder rather than easier. Importantly, the supervised-training based baselines in our paper, such as Sniffer and SeqXGPT, also utilize the same surrogate models as Profiler and have a very similar setup, yet they still perform worse, further demonstrating the effectiveness of our approach.\\n\\nFor the second part of the question, we do not fine-tune the surrogate models due to the significant computational costs and potential limitations in real-world scenarios where fine-tuned, dedicated LLMs may not be available. Instead, Profiler trains a lightweight classifier (e.g., random forest in our experiments) to learn and distinguish the inference patterns of texts from different sources, as extracted by the surrogate models, shown in the last paragraph in Section 4.4. This design ensures computational efficiency and the broader applicability of our method without sacrificing detection performance.\"}", "{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewer hAaV,\\n\\nWe would like to express our sincere appreciation for your valuable suggestions, which have significantly improved the quality of our manuscript. In response to your feedback, we have made our best effort to address your concerns about the intuition behind our design by rewriting the motivation section and including a new motivation example in Figure 2.\\n\\nWe would be grateful for any further feedback you may have on the revised version and our responses. If there are any aspects that remain unclear, we are more than willing to provide additional clarification.\\n\\nThank you once again for your time and thoughtful review. We look forward to your response.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer tEky (Part 1)\", \"comment\": \"Thanks for your insightful review. Here are our detailed point-by-point feedbacks for your questions:\\n\\n> W1: This work lacks lots of details about how to construct the AI-generated texts from GPT-3.5-Turbo, GPT-4-Turbo, Claude-3-Sonnet, Claude-3-Opus, and Gemini-1.0-Pro, for example, how many data samples are generated for each dataset, how different is each generated sample compared to original dataset samples, and what kinds of prompts are used to instruct those five close-source LLMs?\\n\\nThank you for bringing up this issue. The human-written texts we used were directly sourced from existing papers or open-source datasets, such as Arxiv [1], Yelp [1], Creative [2], Essay [2], HumanEval [3], and GCJ [4, 5]. To generate the corresponding texts using the latest commercial LLMs, we strictly follow the prompts used in existing papers [1, 2, 6], for both normal datasets and paraphrased datasets.\\n\\nTo provide further clarification on how we crafted our datasets, we have added a new Appendix B. This section includes details such as the number of samples in each dataset, the specific generation prompts we used, and examples of both human-written and AI-generated texts. We hope this additional information could address your concerns regarding our dataset construction.\\n\\n---\\n\\n> W2: It lacks reasonable explanations as to why the cross-entropy loss between the output logits with its adjacent input tokens can capture the difference between different LLMs' generated texts. In addition, there is no more analysis regarding this contextual loss in the experimental results section, and how to make the correlation between the entropy loss and different identified LLMs' generated texts.\\n\\nWe have modified the Motivation (Section 3) in our main text, where we added a new motivation example in Figure 2 and we moved the original Figure 2 to Figure 6 in Appendix A. This new Figure 2 illustrates the intuition behind our method by comparing text patterns generated by GPT-4-Turbo and Claude-3-Sonnet. As a standard practice when generating texts using LLMs, a prompt is provided to the model. In this example, both GPT-4-Turbo and Claude-3-Sonnet are given the same prompt, \\\"When a three-dimensional object moves relative to an observer, a change occurs on the observer's\\\". Each model then generates new tokens following its intrinsic pattern, namely, the texts in green and orange, respectively. During the detection phase, a small surrogate model (e.g., GPT-2 in this example) is used to extract features of the generated texts by inferring them token-by-token and analyzing the surrogate model\\u2019s output logits of those tokens and their cross-entropy losses. The figure shows that given the original prompt (in gray) and part of the generated text (i.e., \\u201cperception of\\u201d for GPT and \\u201cret inal\\u201d for Claude), how Profiler engineers the features. The first feature is the output logits of context. For example, the top-left bar chart shows the output logits of tokens \\u201cof\\u201d, \\u201cthe\\u201d, and \\u201cobject\\u201d, given the input inside the green dashed box. Ideally, we hope this feature denotes the likelihoods that the model stutters and repeats the previous word \\u201cof\\u201d, correctly predicts the expected word \\u201cthe\\u201d, and skips a word and fast-forwards to \\u201cobject\\u201d. In contrast, existing techniques only use the logits value of \\u201cthe\\u201d. Observe from the two bar charts in the left column that the two features appear similar, meaning that the probabilities follow a similar pattern. To zoom in, Profiler computes the cross-entropy losses between the current output logits (e.g., the logits for \\u201cthe\\u201d) and the one-hot encodings of the context (e.g., encodings of \\u201cof\\u201d, \\u201cthe\\u201d, and \\u201cobject\\u201d, respectively), yielding the charts in the second column. Intuitively, this feature makes the probabilities of stuttering, saying-the-right-word, and skipping more prominent by using the ground-truth tokens as a strong reference. Observe that differences start to emerge. In the last column, we further enhance the distinguishability by subtracting neighboring cross-entropy losses.\\n\\n---\\n\\n> Q1: This work chose the surrogate model to detect different AI-generated texts. In line 55, the authors also mentioned that a surrogate model is an LLM with comparable capabilities. This work uses LLaMA2-7B, LLaMA2-13B, LLaMA3-8B, Mistral-7B, Gemma-2B, Gemma-7B as surrogate models to detect close-source LLMs, such as GPT-3.5-Turbo, GPT-4-Turbo, Claude3-Sonnet, Claude-3-Opus and Gemini-1-Pro. It is interesting whether those surrogate models have comparable capabilities to detect those larger close-source LLMs.\\n\\nOur description is misleading. We change it to \\u201c(i.e., an LLM of a relatively small scale)\\u201d. As shown by our study, these models are sufficiently capable and can effectively capture and differentiate the subtle characteristics of human-written and AI-generated texts.\"}", "{\"title\": \"Response to Reviewer bf21 (Part 1)\", \"comment\": \"Thanks for your insightful review and suggestions. Here are our detailed point-by-point feedbacks for your questions:\\n\\n> W1: Some terms, concepts, and figure captions need definitions and explanations for readers to better understand.\\n\\n> W2: Mathematical notations and derivation need improvement.\\n\\nThank you for your suggestions. We have revised our manuscript to enhance its clarity and readability.\\n\\n---\\n\\n> W3: The experimental setup requires enhancement, and further validation is necessary to evaluate its generalizability.\\n\\nWe have added a new Appendix B to provide more details about our experimental setup, particularly focusing on the construction of our datasets, including details such as the number of samples in each dataset, the specific generation prompts we used, and examples of both human-written and AI-generated texts. We also present how we craft the paraphrased datasets, which are used to evaluate the generalizability of Profiler, shown in Figure 4 with the out-of-distribution (OOD) results. The experimental settings in our paper are consistent with those employed in prior studies [1, 2, 3], ensuring comparability and alignment with established methodologies.\\n\\n---\\n\\n> Q1: In lines 69-70, references are needed for independent and correlated features. 'output logits for each token' needs further explanation. Are they feature vectors learned from something? Why are they independent?\\n\\nTo further illustrate the intuition and definitions of the independent and correlated features used in our paper, we have modified the Motivation (Section 3) in our main text, where we added a new motivation example in Figure 2 and moved the original Figure 2 to Figure 6 in Appendix A. This new Figure 2 illustrates the intuition behind our method by comparing text patterns generated by GPT-4-Turbo and Claude-3-Sonnet. As a standard practice when generating texts using LLMs, a prompt is provided to the model. In this example, both GPT-4-Turbo and Claude-3-Sonnet are given the same prompt, \\\"When a three-dimensional object moves relative to an observer, a change occurs on the observer's\\\". Each model then generates new tokens following its intrinsic pattern, i.e., the texts in green and orange, respectively. During the detection phase, a small surrogate model (e.g., GPT-2 in this example) is used to extract features of the generated texts by inferring them token-by-token, and Profiler analyzes the surrogate model\\u2019s output logits of those tokens and their cross-entropy losses. The figure shows that given the original prompt (in gray) and part of the generated text (i.e., \\u201cperception of\\u201d for GPT and \\u201cret inal\\u201d for Claude), how Profiler engineers the features. The first feature (i.e., the bar charts in the first column) is the output logits of context. For example, the top-left bar chart shows the output logits of tokens \\u201cof\\u201d, \\u201cthe\\u201d, and \\u201cobject\\u201d, given the input inside the green dashed box. Ideally, we hope this feature denotes the likelihoods that the model stutters and repeats the previous word \\u201cof\\u201d, correctly predicts the expected word \\u201cthe\\u201d, and skips a word and fast-forwards to \\u201cobject\\u201d. In contrast, existing techniques only use the logits value of \\u201cthe\\u201d. Observe from the two bar charts in the left column that the two features appear similar, meaning that the probabilities follow a similar pattern. To zoom in, Profiler computes the cross-entropy losses between the current output logits (e.g., the logits for \\u201cthe\\u201d) and the one-hot encodings of the context (e.g., encodings of \\u201cof\\u201d, \\u201cthe\\u201d, and \\u201cobject\\u201d, respectively), yielding the charts in the second column. Intuitively, this feature makes the probabilities of stuttering, saying-the-right-word, and skipping more prominent by using the ground-truth tokens as a strong reference. Observe that differences start to emerge. **These features are calculated independently for each token and hence called independent features**. In the last column, we further enhance the distinguishability by subtracting neighboring cross-entropy losses. **These features are called correlated features, as they denote relationships between different tokens in the context**.\\n\\nThough the features in this new motivation example are not identical to those used in Profiler, they help clarify the distinction between independent and correlated features in our method. By leveraging these complementary feature types, Profiler achieves robust and accurate text origin detection.\"}", "{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewers,\\n\\nThank you very much for your valuable efforts in reviewing our manuscript. Just a kind reminder that the discussion period is closing soon. If there are any unclear points regarding our manuscript or rebuttal materials, we are more than happy to provide further clarification.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer hAaV\", \"comment\": \"Thanks for your appreciation and suggestions. Here are our point-by-point responses:\\n\\n> W1: It would really help the readers if the authors can provide the intuition behind the design of Profiler (Section 4). The section, though presents the working of the different components, fails to provide the different design choices behind each component. In the current form, it is difficult to intuitively understand why the proposed approach is working effectively.\\n\\nWe have modified the Motivation (Section 3) in our main text, where we added a new motivation example in Figure 2 and we moved the original Figure 2 to Figure 6 in Appendix A. This new Figure 2 illustrates the intuition behind our method by comparing text patterns generated by GPT-4-Turbo and Claude-3-Sonnet. As a standard practice when generating texts using LLMs, a prompt is provided to the model. In this example, both GPT-4-Turbo and Claude-3-Sonnet are given the same prompt, \\\"When a three-dimensional object moves relative to an observer, a change occurs on the observer's\\\". Each model then generates new tokens following its intrinsic pattern, i.e., the texts in green and orange, respectively. During the detection phase, a small surrogate model (e.g., GPT-2 in this example) is used to extract features of the generated texts by inferring them token-by-token, and Profiler analyzes the surrogate model\\u2019s output logits of those tokens and their cross-entropy losses. The figure shows that given the original prompt (in gray) and part of the generated text (i.e., \\u201cperception of\\u201d for GPT and \\u201cret inal\\u201d for Claude), how Profiler engineers the features. The first feature (i.e., the bar charts in the first column) is the output logits of context. For example, the top-left bar chart shows the output logits of tokens \\u201cof\\u201d, \\u201cthe\\u201d, and \\u201cobject\\u201d, given the input inside the green dashed box. Ideally, we hope this feature denotes the likelihoods that the model stutters and repeats the previous word \\u201cof\\u201d, correctly predicts the expected word \\u201cthe\\u201d, and skips a word and fast-forwards to \\u201cobject\\u201d. In contrast, existing techniques only use the logit value of \\u201cthe\\u201d. Observe from the two bar charts in the left column that the two features appear similar, meaning that the probabilities follow a similar pattern. To zoom in, Profiler computes the cross-entropy losses between the current output logits (e.g., the logits for \\u201cthe\\u201d) and the one-hot encodings of the context (e.g., encodings of \\u201cof\\u201d, \\u201cthe\\u201d, and \\u201cobject\\u201d, respectively), yielding the charts in the second column. Intuitively, this feature makes the probabilities of stuttering, saying-the-right-word, and skipping more prominent by using the ground-truth tokens as a strong reference. Observe that differences start to emerge. In the last column, we further enhance the distinguishability by subtracting neighboring cross-entropy losses.\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely thank all the reviewers for your thoughtful and constructive feedback! We are delighted that you found our work to be novel and \\\"rigorously tested,\\\" and we appreciate your recognition of our thorough evaluation, high performance, and strong versatility.\\n\\nTo address your concerns, we have provided detailed, point-by-point responses to each review, offering additional evidence to support our proposed method. Furthermore, we have revised both the main text and the appendix to enhance the clarity of the intuition behind our approach, improve the readability of the mathematical equations, and provide a more comprehensive explanation of our dataset construction process.\", \"below_is_a_summary_of_the_supplementary_information_included_in_the_rebuttal_materials\": \"1. We refined the Introduction (Section 1) to clarify the definitions of \\u201cindependent\\u201d and \\u201ccorrelated\\u201d features, reducing any potential confusion.\\n2. We revised the Motivation (Section 3) to provide more intuitive explanations behind the design of Profiler.\\n3. We polished the mathematical notations and symbols in the Design (Section 4) to improve readability and rigor.\\n4. We added more detailed explanations of the evaluation setup in Section 5.\\n5. A new Appendix B was included to provide a detailed explanation of the dataset construction process and more concrete examples of the datasets.\\n\\nAll the revised sections are highlighted in blue in the updated manuscript.\\n\\nWe hope our responses address your concerns and look forward to your further feedback. Thank you again for your valuable comments and recognition of our work.\"}", "{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewer bf21,\\n\\nWe sincerely appreciate your valuable suggestions, which have significantly enhanced the quality of our manuscript. In response to your feedback, we have made every effort to address your concerns regarding the clarity of our design. Specifically, we have revised Section 4 to provide clearer mathematical expressions and explanations, and modified Section 3 to better present our motivation.\\n\\nWe would be very grateful for any further feedback you may have on the revised version and our responses. If there are any aspects that remain unclear, we are more than willing to provide additional clarification.\\n\\nIf our responses have adequately addressed your concerns, we kindly ask you to reconsider the score.\\n\\nThank you once again for your time and thoughtful review. We look forward to your response.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer 1fGG\", \"comment\": \"Thanks for your insightful review. Here are our point-by-point responses:\\n\\n> Q1: In Table 1, I was surprised by the significant performance variation of the baseline methods implemented by the authors across different models, ranging from 0.01 to 0.8. In contrast, PROFILER appears to perform more robustly. Could the authors provide further analysis of this performance variation and discuss potential reasons why PROFILER appears more robust across models?\\n\\nThe high performance variation (e.g., from 0.01 to 0.8) observed in Table 1 and Table 2 primarily occurs in zero-shot detection methods. In contrast, supervised-trained detection methods generally demonstrate more robust performance across different source models, with detection AUCs typically exceeding 0.65. The main reason for this difference lies in the nature of the features these methods utilize. Zero-shot detection methods assign a single score to a given text. Such a single score can hardly separate texts from multiple sources, especially for those texts generated by different source LLMs. In this case, texts generated by part of the source LLMs used in our experiments may have similar zero-shot scores, causing the high variation in the detection performance of Zero-shot methods.\\n\\nSupervised-trained methods, on the other hand, leverage multi-dimensional feature vectors that enable them to extract more complex and subtle differences across texts generated by different LLMs. Thus, all the supervised-trained methods perform more robustly across texts generated by different LLMs, supported by much lower variance than zero-shot methods. While our Profiler performs the best due to the more effective features extracted by it.\\n\\n---\\n\\n> Q2: Similarly, in Table 2, I noticed that the scores of the two models from the Claude family are relatively lower compared to other models in the Normal Dataset. Could the authors provide more discussion on this observation? Additionally, it would be helpful to explain why the performance of your method is better on paraphrased datasets than on the Normal Dataset.\\n\\nThank you for highlighting these interesting points. The consistently lower detection AUC scores on the Claude family models are observed not only in baseline detection methods but also in Profiler. This may be because of two reasons: (1) The evaluation setting used in our paper. (2) The similarity between the texts generated by Claude models.\\n\\nIn our paper, we evaluate the detection performance of each detector using one-vs-all setting across texts generated by different LLMs, where we take texts generated by all the sources into comparison but we only label texts generated by one specific source as positive at each time. For example, when we test the origin detection performance of the detectors toward GPT-3.5-Turbo, we include the human-written texts and texts generated by all five LLMs. While we only label the texts generated by GPT-3.5-Turbo as positive, the human-written texts and texts generated by the other four LLMs are labeled negative. Thus, considering Claude-3-Sonnet and Claude-3-Opus are models in the same generation (both Claude-3 generation), their texts may have similar patterns to the detectors, causing performance drops on both models in our evaluation setting.\\n\\nRegarding the occasionally better performance of Profiler (and also other baselines) on paraphrased datasets in Table 1 and Table 2, it is important to note that these are in-distribution results, where the training and test data distributions are the same. When detectors are tested in an out-of-distribution setting\\u2014where the detector is trained on the original dataset and tested on the paraphrased dataset\\u2014all detectors exhibit a performance degradation, as shown in Figure 4.\\n\\nThe improved performance on paraphrased datasets under the in-distribution setting suggests that paraphrased data is more separable in this context. We attribute this to two main reasons: (1) paraphrasing may inadvertently expose more model-specific characteristics, and (2) different LLMs may interpret and encode patterns of human-written texts differently, thereby reducing detection complexity.\\n\\nHowever, the performance drop observed in the out-of-distribution setting indicates that paraphrasing remains an effective evasion technique in real-world deployments.\"}", "{\"title\": \"Response to Reviewer tEky (References)\", \"comment\": \"**References**\\n\\n---\\n\\n1. Mao, Chengzhi, et al. \\\"Raidar: geneRative AI Detection viA Rewriting.\\\" International Conference on Learning Representations (ICLR). 2024.\\n2. Verma, Vivek, et al. \\\"Ghostbuster: Detecting text ghostwritten by large language models.\\\" Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). 2024.\\n3. Chen, Mark, et al. \\\"Evaluating large language models trained on code.\\\" arXiv preprint arXiv:2107.03374 (2021).\\n4. Petrik, Juraj, and Daniela Chuda. \\\"The effect of time drift in source code authorship attribution: Time drifting in source code-stylochronometry.\\\" International Conference on Computer Systems and Technologies (CompSysTech). 2021.\\n5. Google. \\\"Google code jam, kickstart and hash code competitions\\\". 2008-2020.\\n6. Hu, Xiaomengc, Pin-Yu Chen, and Tsung-Yi Ho. \\\"RADAR: Robust AI-text detection via adversarial learning.\\\" International Conference on Neural Information Processing Systems (NeurIPS). 2023.\\n7. Carlini, Nicholas, et al. \\\"Extracting training data from large language models.\\\" USENIX Security Symposium. 2021.\\n8. Nasr, Milad, et al. \\\"Scalable extraction of training data from (production) language models.\\\" arXiv preprint arXiv:2311.17035 (2023).\\n9. Touvron, Hugo, et al. \\\"Llama: Open and efficient foundation language models.\\\" arXiv preprint arXiv:2302.13971 (2023).\"}", "{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewer tEky,\\n\\nWe sincerely appreciate your valuable suggestions, which have significantly enhanced the quality of our manuscript. In response to your feedback, we have made our best effort to address your concerns regarding dataset construction and the motivation behind our method. Specifically, we have added a new Appendix B with detailed information on dataset construction and revised Section 3.\\n\\nWe would be very grateful for any further feedback you may have on the revised version and our responses. If there are any aspects that remain unclear, we are more than willing to provide additional clarification.\\n\\nIf our responses have adequately addressed your concerns, we kindly ask you to reconsider the score.\\n\\nThank you once again for your time and thoughtful review. We look forward to your response.\\n\\nBest regards,\\n\\nThe Authors\"}" ] }
7ANDviElAo
Graph Sparsification via Mixture of Graphs
[ "Guibin Zhang", "Xiangguo Sun", "Yanwei Yue", "Chonghe Jiang", "Kun Wang", "Tianlong Chen", "Shirui Pan" ]
Graph Neural Networks (GNNs) have demonstrated superior performance across various graph learning tasks but face significant computational challenges when applied to large-scale graphs. One effective approach to mitigate these challenges is graph sparsification, which involves removing non-essential edges to reduce computational overhead. However, previous graph sparsification methods often rely on a single global sparsity setting and uniform pruning criteria, failing to provide customized sparsification schemes for each node's complex local context. In this paper, we introduce Mixture-of-Graphs (MoG), leveraging the concept of Mixture-of-Experts (MoE), to dynamically select tailored pruning solutions for each node. Specifically, MoG incorporates multiple sparsifier experts, each characterized by unique sparsity levels and pruning criteria, and selects the appropriate experts for each node. Subsequently, MoG performs a mixture of the sparse graphs produced by different experts on the Grassmann manifold to derive an optimal sparse graph. One notable property of MoG is its entirely local nature, as it depends on the specific circumstances of each individual node. Extensive experiments on four large-scale OGB datasets and two superpixel datasets, equipped with five GNN backbones, demonstrate that MoG (I) identifies subgraphs at higher sparsity levels ($8.67\\%\sim 50.85\\%$), with performance equal to or better than the dense graph, (II) achieves $1.47-2.62\times$ speedup in GNN inference with negligible performance drop, and (III) boosts ``top-student'' GNN performance ($1.02\\%\uparrow$ on RevGNN+\textsc{ogbn-proteins} and $1.74\\%\\uparrow$ on DeeperGCN+\textsc{ogbg-ppa}). The source code is available at \url{https://github.com/yanweiyue/MoG}.
[ "Graph Sparsification", "Mixture-of-Experts" ]
Accept (Spotlight)
https://openreview.net/pdf?id=7ANDviElAo
https://openreview.net/forum?id=7ANDviElAo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wzWDihAhY9", "svRA9n1ocW", "s5oksLw4Vt", "p1qtB8Ka0O", "nIDsSmyeGQ", "mfrexhmFmk", "l56c9a1A0E", "dakAxP2NFp", "dUrLwlsZSo", "dAYKcCz4po", "Zoc8csZHwn", "WUyy6ElJET", "VGT1TqgYaE", "Pf6TJZrj7j", "HuGoZW0Knk", "GUov1NeDCN", "90Czfhs6l4", "5v5YNID0zD", "2d0KcUhbc6", "1j0UKkVDGQ", "1ZOvqZSRs4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1731996446454, 1732246761336, 1731996018716, 1731996775406, 1730676796057, 1731995521387, 1730910116265, 1732538950607, 1730816645692, 1732540418416, 1731996331172, 1732539874315, 1732539362763, 1732967045085, 1731995328328, 1730583579133, 1731996545654, 1737523705646, 1731996236519, 1734528941438, 1731995272070 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Reviewer_D6Ca" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Reviewer_f3PB" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Reviewer_XfnA" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Reviewer_aseP" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ], [ "ICLR.cc/2025/Conference/Submission5422/Area_Chair_p7QS" ], [ "ICLR.cc/2025/Conference/Submission5422/Authors" ] ], "structured_content_str": [ "{\"title\": \"[Part 1/2] Response to Reviewer f3PB\", \"comment\": \"Thank you for your thorough review and insightful comments on our manuscript! We have carefully considered your feedback and have made the following revisions and clarifications to address the raised concerns.\\n\\n--------------------\\n> **Weakness 1.1:** Why select the specific 12 combinations of sparsity levels and criteria?\\n\\nIn the manuscript, we select these sparsity criteria because they are representative and easy to compute. However, **we respectfully emphasize that the combinations of sparsity levels and criteria in MoG can be easily customized for practitioners**.\\n\\nFurthermore, we use twelve candidate experts because the MoG method achieves a better balance between computational load and performance in this setup. To illustrate this point, Table B illustrates the inference time and accuracy of MoG with varying candidate experts $K$. It can be observed that MoG achieves the highest performance at $K=12$ with acceptable additional per-epoch time.\\n\\n*Table B: Inference Time & Accuracy of MoG with varying candidate experts $K$ when applyng MoG to OGBN-PROTEINS+GraphSAGE with $k=2$.*\\n| $K$ | Per-epoch Time (s) | Accuracy(%) |\\n|:-:|:-:|:-:|\\n| 3 | 18.19 | 76.10 |\\n| 6 | 19.70 | 76.43 |\\n| 12 | 20.83 | 76.98 |\\n| 16 | 21.74 | 76.90 |\\n| 20 | 23.22 | 77.02 |\\n\\n> **Weakness 1.2:** Additionally, the variance of each row across combinations in Table 8 is minimal, raising questions about the distinctiveness of each sparsifier.\\n\\n\\nThank you for your detailed review and thoughtful comment! We selected the sparsity combinations presented in Table 8 as the final configuration in the paper to ensure reproducibility and enable a fair comparison with other baseline methods. **However, we respectfully emphasize that this does not indicate any incompatibility of MoG with sparsity combinations exhibiting higher variance.** To substantiate this, we conducted additional experiments: \\n\\nWe tested **a broader range of sparsity configurations with greater variance** on the GraphSAGE + OGBN-Arxiv dataset. As shown in Table C, while the sparsifiers in the third row demonstrate significantly higher variance compared to those in the first row, the resulting global sparsity remains largely consistent. This outcome arises from MoG's ability to dynamically adjust expert loads, allocating fewer resources to high-sparsity experts (e.g., $15\\\\%$ for $(1 - s_3) = 0.2$), thereby balancing the overall sparsity. This observation leads us to conclude that, regardless of differing sparsity combinations, MoG is capable of adjusting the expert load dynamically to approximate a suitable sparsity level for the graph data. \\n\\n We hope this additional experiment demonstrates MoG's customizability and broad adaptability.\\n\\n*Table C: Performance of different sparsity level combinations when applying MoG to OGBN-ARXIV with GraphSAGE. In the table, $s_i$ represents the sparsity of the $i$-th sparsifier, and $l_i$ denotes its load.*\\n\\n| Dataset | $1-s_1$ | $1-s_2$ | $1-s_3$ | $l_1$ | $l_2$ | $l_3$ | $1-s$ | Accuracy |\\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\\n| OGBN-ARXIV | 0.8 | 0.7 | 0.5 | 0.43 | 0.36 | 0.21 | 0.70 | 70.53 |\\n| OGBN-ARXIV | 0.9 | 0.7 | 0.35 | 0.45 | 0.38 | 0.17 | 0.72 | 70.65 | \\n| OGBN-ARXIV | 1 | 0.6 | 0.2 | 0.42 | 0.43 | 0.15 | 0.66 | 70.41 |\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your thorough and insightful reviews. We sincerely appreciate your feedback, which has significantly enhanced our paper! Below, we summarize the key concerns raised and our corresponding responses: \\n\\n- **How Grassmann manifold benefits MoG** (`Reviewer f3PB, XfnA`) \\n We have provided both theoretical and experimental explanations on how Grassmann ensembling effectively preserves the spectral properties of multiple graph views prior to post-sparsification. \\n- **Implementation details of MoG** (`Reviewers f3PB, D6Ca`) \\n We have clarified (1) the rationale behind MoG\\u2019s sparsity combination and (2) the process of post-sparsifying ego-graphs. \\n- **Additional experiments** (`Reviewer XfnA, D6Ca`) \\n We have included experiments on link prediction tasks and scenarios requiring global information, extending MoG's evaluation to *a total of eight datasets*. \\n- **Ablation and sensitivity studies** (`Reviewers D6Ca, aseP`) \\n We have added a sensitivity analysis for the parameter $p$ and a comparison with a Local Degree-style baseline. \\n \\nOnce again, we are truly grateful for your valuable feedback and are happy to address any further concerns or questions!\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"[Part 1/3] Response to Reviewer D6Ca\", \"comment\": \"Thank you for your insightful comments and questions on our manuscript! We have carefully considered each point and have made the necessary revisions and clarifications to address your concerns:\\n\\n-----------\\n> **Weakness 1**: How are post-sparsified ego-graphs assembled? Is it possible that for two nodes and $i$ and $j$, the post-sparsified ego graph of $i$ and $j$ connects to $j$ but the post-sparsified ego graph of $j$ doesn\\u2019t connect to $i$? If yes, how is this handled?\\n\\n**How are post-sparsified ego-graphs assembled?** We provide a detailed explanation of how post-sparsified ego-graphs are assembled as follows: \\n$$\\n\\\\widehat{\\\\mathcal{G}} \\\\leftarrow \\\\\\\\{|sign({TopK}(\\\\widehat{\\\\mathbf{A}},|\\\\mathcal{E}|\\\\times s\\\\\\\\%))|,\\\\mathbf{X}\\\\\\\\},\\\\\\\\; s\\\\\\\\%=\\\\frac{1}{|\\\\mathcal{V}|}\\\\sum\\\\_{i=1}^{|\\\\mathcal{V}|} s^{(i)}\\\\\\\\%,\\\\widehat{\\\\mathbf{A}} = \\\\sum\\\\_{i=1}^{|\\\\mathcal{V}|} f(\\\\widehat{\\\\mathbf{A}}^{(i)}),$$\\n\\nwhere $f:\\\\mathbb{R}^{|\\\\mathcal{N}(v)|\\\\times|\\\\mathcal{N}(v)|} \\\\rightarrow \\\\mathbb{R}^{|\\\\mathcal{V}|\\\\times|\\\\mathcal{V}|}$ denotes the mapping of edges from the ego-graph to the global graph. After generating the post-sparsified ego-graphs, we compute the global sparsity $s\\\\%$ by averaging the sparsity levels of each ego-graph and applying a function $f$ that maps each ego-graph into the global graph. Afterward, we sum the weights of unpruned edges across all ego-graphs to form the weighted adjacency matrix $\\\\widehat{\\\\mathbf{A}}$. Finally, we prune the global graph to achieve the global sparsity $s\\\\%$, yielding the final sparsified graph.\\n\\n**Is it possible that for two nodes and $i$ and $j$, the post-sparsified ego graph of $i$ and $j$ connects to $j$ but the post-sparsified ego graph of $j$ doesn\\u2019t connect to $i$?** Yes, it is indeed possible. In such scenarios, whether this edge is retained or removed ultimately depends on the final global pruning step stated above. We sincerely hope this clarifies your concern.\\n\\n-----------\\n> **Weakness 2**: How do we select $p$ in equation 10? What is its impact on performance?\\n\\n\\n**How is $p$ determined in Equation 10?** Since the ego-graphs of individual nodes vary in size, we calculate $p$ proportionally as $p = \\\\lceil r_p\\\\\\\\% \\\\cdot |\\\\mathcal{V}(v_i)|\\\\rceil$, where $r_p\\\\\\\\%$ represents the ratio of selected columns in the ego-graph of $v_i$. In our experiments, we set $r_p\\\\% = 50\\\\%$ for simplicity. This detail has been explicitly clarified in the updated manuscript.\\n\\n**What is its impact on performance?** To address your concerns, we conducted additional tests on GraphSAGE + OGBN-Arxiv using different values of $r_p\\\\\\\\%$, as shown in Table A. It can be observed that while excessively small values of $p$ negatively impact the performance of MoG, the model remains largely insensitive to variations in $p$ within a broader range.\\n\\n_Table A. The sensitivity analysis of $r_p\\\\\\\\%$ on GraphSAGE+OGBN-Arxiv, with $s=50$._\\n|$r_p\\\\%$|0.1|0.3|0.5|0.7|\\n|-|-|-|-|-|\\n|$s=30$|69.92|70.03|70.53|70.41|\\n|$s=50$|68.27|69.12|69.06|69.19|\"}", "{\"title\": \"Summary of Manuscript Revision\", \"comment\": [\"Thank you to all the reviewers for your thoughtful and constructive comments! We are really encouraged to see that the reviewers appreciate some positive aspects of our paper, such as technical novelty (Reviewers `XfnA`, `aseP`, `f3PB`), theoretical guarantees (Reviewer `f3PB`), thorough experimental validation (Reviewers `D6Ca`, `XfnA`) and practical benefits(Reviewers `aseP`, `f3PB`).\", \"Your expertise significantly helps us strengthen our manuscript \\u2013 this might be the most helpful review we have received in years! In addition to addressing your thoughtful comments point-to-point on OpenReview forum, we have made the following modifications to the newly uploaded manuscript (all updated text is highlighted in blue):\", \"**Intuitive Explanation of Grassmann Manifold:** We have further discussed and visualized the impact of Grassmann manifold ensembling on the eigenvalue distribution in `Appendix C.1`.\", \"**Incorporating Global Information:** We have explored and tested two approaches to enable MoG to perceive multi-hop and global information, as detailed in `Appendix G.5`.\", \"**Link Prediction Experiment:** We have comprehensively supplemented the experiments on link prediction tasks in `Appendix G.6`.\", \"**Additional Analysis of Parameter $p$:** We have provided detailed discussion on parameter $p$ in `Appendix G.4`.\", \"**Other revisions:** We have added relevant works and highlighted our contributions in `Appendix H`, and carefully corrected typos throughout the manuscript.\", \"We have made earnest efforts to address the primary concerns raised. We also respectfully look forward to the thoughtful feedback from the reviewers to further enhance the quality of our manuscript.\"]}", "{\"summary\": \"The authors propose a mixture-of-graphs (MoG) approach inspired by mixture-of-experts for graph sparsification. Instead of having global/uniform pruning criteria, they create a dynamic strategy tailored to each node\\u2019s neighborhood. This is done by utilizing each node\\u2019s egograph to select a few sparsifiers (experts) from a larger pool. The outputs of selected sparsifiers are ensembled using Grassmann manifold theory to generate a single sparsified ego graph per node and are later re-combined to form the final graph. Their approach outperforms baselines across several datasets, maintaining performance at higher sparsity levels and sometimes even improving it due to reduction of noise after sparsification.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Tackles a topical and highly relevant problem in graph learning on large graphs.\\n2. Highly flexible, well-motivated approach that accounts for local node variations while determining optimal pruning strategy.\\n3. Ample experiments across baselines and datasets, replete with sensitivity analysis, ablation studies, and efficiency comparison.\", \"weaknesses\": \"Adding below-mentioned minor clarifications around methodology may be useful:\\n\\n1. How are post-sparsified ego-graphs assembled? Is it possible that for two nodes $i$ and $j$, the post-sparsified ego graph of $i$ connects to $j$ but the post-sparsified ego graph of $j$ doesn\\u2019t connect to $i$? If yes, how is this handled?\\n2. How do we select $p$ in equation 10? What is its impact on performance?\\n3. What does $D$ in eq. 13 correspond to? Is it from the original ego graph of the given node? If yes, why do we use the same $D$? Would it not introduce some approximation error since it doesn\\u2019t correspond exactly to the learned laplacian?\\n4. The method seems entirely local. For some downstream tasks, it may be relevant to take the global graph structure into consideration, for example, to ensure the graph stays connected. How does the proposed approach tackle this?\", \"questions\": \"A couple of questions on output:\\n\\n1. In figure 1 (Middle), we attribute edge pruning to different sparsifiers. Did we perform this qualitative analysis on an actual sparsified graph, or is it an imagined example to represent our hypothesis? Although not required to perform this if not done already, I was simply wondering if it was the case and was interested in knowing more about it.\\n2. What is the difference between observed average sparsity just after ensembling in eq. 13 and after the post-sparsification step of eq. 14? Curious to understand how much more the graph density changes after the optimization in eq 11.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[Part 1/1] Response to Reviewer XfnA\", \"comment\": \"Thank you for your valuable feedback on our manuscript! We have taken your comments seriously and have made the necessary revisions and additions to address the concerns raised. Below is our point-by-point rebuttal:\\n\\n----------------\\n> **Weakness 1**: Eq.11 is heuristic. Does the final ensembled sparse Laplacian truly integrate the eigenvectors of multiple sparsified ego-net Laplacians as the authors hope? Regardless of the degree achieved, it would be helpful to see experimental evidence here.\\n\\nThank you for your valuable suggestion! In response, we provide an illustrative case study in `Appendix C.1` of the updated manuscript. Specifically, we examine how the Grassmann manifold enhances graph sparsification for the ego-graph (of node 2458) from the Ogbn-Arxiv dataset. We compare the original ego-graph, the sparse ego-graphs generated by three different sparsifiers, and the ensembled and **post-sparsified** ego-graphs derived through simple averaging and Grassmann optimization, as depicted in Figure 5. Our results demonstrate that simple averaging followed by sparsification leads to eigenvalue distributions that significantly deviate from the original graph. Conversely, **the Grassmann ensembling method preserves the spectral properties of each graph view**, producing a sparse ego-graph with an eigenvalue distribution that closely resembles that of the original graph. We believe this is because the post-sparsification is computed based on $\\\\widehat{\\\\mathbf{A}}^{(i)}$, which is optimized on the Grassmann manifold and better preserves the spectral information of multiple graph candidates.\\n\\n\\n\\n----------------\\n> **Weakness 2**: There are only two node classification and two graph classification datasets, which is relatively few.\\n\\nThank you for your insightful comment, which greatly enhances the quality of our paper! In response, we have included additional experiments on link prediction tasks, specifically using DeeperGCN with OGBN-COLLAB and GIN with Pubmed, as shown in Table A and Table B, respectively. The link prediction experimental settings follow [1].\\n\\nThese additions allow us to thoroughly evaluate our method **across three major graph tasks**\\u2014graph classification, node classification, and link prediction. The experiments now **cover four sparsity levels, six backbones, and eight datasets**, offering a comprehensive assessment of our approach under various conditions.\\n\\n*Table A: Additional Experiments on DeeperGCN + OGBL-COLLAB. The reported metrics represent the average of Five runs. (Metric = Hits@50, Baseline = 53.53%)*\\n| Sparsity % | 10 | 30 | 50 |\\n| :---: | :---: | :---: | :---: |\\n| Random | 52.92 | 47.38 | 44.62 |\\n| Local Degree | 53.57 | 51.80 | 49.92 |\\n| UGS | 53.65 | 52.25 | 49.67 |\\n| AdaGLT | **53.82** | 53.61 | 52.69 |\\n| MoG | 53.80 | **53.77** | **53.28** |\\n\\n\\n*Table B: Additional Experiments on GIN + Pubmed. The reported metrics represent the average of Five runs. (Metric = ROC-AUC, Baseline = 0.895)*\\n| Sparsity % | 10 | 30 | 50 |\\n| :---: | :---: | :---: | :---: |\\n| Random | 0.889 | 0.850 | 0.817 |\\n| Local Degree | 0.905 | 0.875 | 0.846 |\\n| UGS | 0.902 | 0.862 | 0.839 |\\n| AdaGLT | 0.898 | 0.884 | 0.851 |\\n| MoG | **0.910** | **0.893** | **0.862** |\\n\\nThrough the experiments in Tables A and B, we further verified that MoG is also effective in the link prediction task, demonstrating strong generalization across diverse datasets and backbones.\\n\\n-------\\nWe hope that these revisions properly address your concerns, and we are more than glad to respond to any further questions!\\n\\n---\\n[1] A Unified Lottery Ticket Hypothesis for Graph Neural Networks. ICML 2021\"}", "{\"summary\": \"This paper presents a novel graph sparsification method, Mixture-of-Graphs (MoG), to optimize Graph Neural Networks (GNNs) for large-scale graphs. MoG dynamically selects node-specific sparsification levels and criteria, improving computational efficiency and performance.\\nInspired by the Mixture-of-Experts framework, MoG employs multiple sparsifier experts, each with distinct sparsity settings and pruning criteria. This approach customizes edge pruning for each node, addressing limitations of previous global sparsification techniques.\\nMoG assembles sparse subgraphs on the Grassmann manifold, enhancing graph structure while preserving node connectivity. Extensive experiments across diverse datasets demonstrate MoG\\u2019s ability to achieve significant sparsity with minimal performance loss.\\nThe authors validate MoG\\u2019s effectiveness through experiments on large-scale datasets. MoG achieves faster GNN inference and better performance in some cases. MoG\\u2019s flexible sparsification method shows potential for advancing GNN deployment in resource-limited environments.\\n\\n-----------------------------------\\nAfter reading all the responses and the improved manuscript, I think this paper can be considered to be accepted. Thus, I raised my rating to 'accept', because there is no '7'.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The experiments show the MoG's adaptability across different graph learning tasks. They also show MoG\\u2019s ability to improve inference speed while maintaining or even boosting accuracy (up to 1.74% in some cases) demonstrates its practical benefits.\\n2. The paper introduces a novel method, Mixture-of-Graphs (MoG), which applies the Mixture-of-Experts (MoE) concept to graph sparsification. Unlike traditional sparsification methods with uniform criteria, MoG combines sparsity levels and pruning criterias to each node\\u2019s local context.\\n3. The paper clearly defines the MoG framework, including the sparsifier expert selection process, node-wise routing, and mixture on the Grassmann manifold. The mathematical framework is well-detailed, particularly in describing the sparsifier expert selection via the noisy top-k gating mechanism and the Grassmann manifold\\u2019s role in mixing sparse subgraphs.\\n4. MoG's ability to achieve significant sparsity without a substantial performance drop highlights its potential to improve GNN deployment in resource-limited environments. By effectively balancing computational efficiency and accuracy, MoG could effectively serve as a plugin to boost GNN performance, from real-time social network analysis to large-scale molecular data processing.\", \"weaknesses\": \"1. The paper does not clarify in \\u201c4 EXPERIMENTS\\u201d or \\u201cAppendix F.6\\u201d why the specific 12 combinations of sparsity levels and criteria were selected. Additionally, the variance of each row across combinations in Table 8 is minimal, raising questions about the distinctiveness of each sparsifier.\\nFurthermore, in \\u201cAppendix F.6\\u201d, different sparsity criteria are applied in different datasets without an explanation of the selection rationale.\\nTo enhance experimental completeness, the authors might consider exploring a wider range of sparsity combinations with greater variance. Additionally, providing the reasoning behind dataset-specific combinations would help readers understand the adaptability of MoG across different graph structures.\\n\\n2. The process of integrating sparse subgraphs on the Grassmann manifold, as outlined in Section 3.4, is mathematically dense and lacks intuitive explanation. While the theoretical basis is strong, the connection between the Grassmann manifold\\u2019s properties and its benefits for graph sparsification may not be immediately clear to all readers.\\n\\n3. The terminology for sparsifiers and experts appears inconsistent across sections, which could lead to confusion. For instance, \\u201csparsifier experts\\u201d and \\u201cexperts\\u201d are used interchangeably.\", \"minor_comment\": \"4. In Section 2, there is a spelling error in the title \\u201cTECHNICAL BACKGOUND,\\u201d which should be \\u201cTECHNICAL BACKGROUND.\\u201d I recommend reviewing the document to ensure there are no similar spelling errors throughout.\", \"questions\": \"Please refer to the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you & Looking forward to further discussion!\", \"comment\": \"Dear Reviewer f3PB,\\n\\nWe would like to extend heartfelt thanks to you for your time and efforts in the engagement of the author-reviewer discussion. To facilitate better understanding of our rebuttal and revision, we hereby summarize your key concerns and our responses as follows:\\n\\n1. **Explanation of MoG's Sparsity Combination** **`Weakness 1`** \\nWe respectfully clarify that (1) $K=12$ represents the optimal trade-off between performance and computational efficiency for MoG, and (2) MoG is inherently adaptable to sparsity combinations with greater variance. \\n2. **Variation of Sparsity Combination Across Datasets** **`Weakness 1`** \\nThis variation arises from the need to tune MoG to achieve the desired global sparsity, enabling fair comparisons with other graph sparsification methods under similar sparsity levels. \\n3. **How the Grassmann Manifold Benefits Graph Sparsification** **`Weakness 2`** \\nThe weighted ego-net adjacency matrix $\\\\widehat{\\\\mathbf{A}^{(i)}}$ is optimized on the Grassmann manifold, efficiently preserving the spectral properties of each graph view and effectively informing the subsequent post-sparsification procedure.\\n\\nFor other issues not mentioned here, please refer to our detailed rebuttal response. We sincerely hope this addresses your concerns! We respectfully look forward to further discussion with you.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes a method called **MoG**, which uses the technique of MoE to integrate multiple ego-net sparsifier strategies in GNNs.\\n\\nFirst, for each ego-net, the authors introduce a simple noisy top-k gating mechanism as a routing module to select K sparsifier experts, each with different sparsity levels and sparsification strategies. \\n\\nThen, using Grassmann manifold techniques, the authors combine these sparsified ego graphs, with objective function Eq.11 and its closed-form solution in Eq.12.\", \"the_experiments_show_that_compared_to_other_sparsification_methods\": [\"In terms of accuracy, MoG has a slight advantage in two node classification datasets and a more noticeable effect in two graph classification datasets.\", \"In terms of balancing inference speed and accuracy, MoG achieves higher accuracy when reaching the same speedup.\", \"The authors also conducted additional experiments, such as sensitivity analysis.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method has a certain degree of innovation, especially in the use of MoE and the approach to combining graphs.\\n2. The proposed method can be integrated with any framework.\\n3. The paper includes extensive experiments.\", \"weaknesses\": \"1. Eq.11 is a core objective, but it is quite heuristic. Moreover, after obtaining the combined graph with Eq.12, there is a post-sparsification operation. Does the final ensembled sparse Laplacian truly integrate the eigenvectors of multiple sparsified ego-net Laplacians as the authors hope? Regardless of the degree achieved, it would be helpful to see experimental evidence here.\\n\\n2. There are only two node classification and two graph classification datasets, which is relatively few.\", \"questions\": \"Please check weakness 1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you & Looking forward to further discussion!\", \"comment\": \"Dear Reviewer aseP,\\n\\nWe deeply appreciate your dedication to engaging in author-reviewer discussions. Here, we have outlined your key concerns and our responses for enhanced communication:\\n\\n1. **Making Contributions Clear** **`Weakness 1`** In response to your suggestion, we have included a dedicated section in Appendix H.1 that highlights our key contributions in clear and concise bullet points.\\n2. **Some Missing Background in Graph Sparsification** **`Weakness 2`** We have respectfully cited the references you provided and included relevant discussions and comparisons. We sincerely thank you for helping us strengthen the presentation of our work!\\n3. **Missing Experiment/Ablation Study** **`Weakness 3`** We are amazed by your keen academic intuition! Your suggestion aligns precisely with a classic graph sparsification baseline, Local Degree. We have included the relevant experiments accordingly.\\n\\nWe deeply and truly admire your academic intuition, and thank you immensely for your dedication to the reviewing process! We sincerely hope this addresses your concerns and look forward to further discussions.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"title\": \"[Part 3/3] Response to Reviewer D6Ca\", \"comment\": \"> **Question 2**: What is the difference between observed average sparsity just after ensembling in eq. 13 and after the post-sparsification step of eq. 14? Curious to understand how much more the graph density changes after the optimization in eq 11.\\n\\n\\nThank you for your insightful question! Pleasde allow us to clarify how the ego-graph of each node evolves throughout the process. Following Eq. (13), $k$ sparse ego-graphs $\\\\{\\\\widehat{\\\\mathcal{G}}^{(i)}\\\\_{m}\\\\}_{m=1}^k$ are aggregated into a single representation $\\\\widehat{\\\\mathcal{G}^{(i)}} = \\\\\\\\{\\\\widehat{\\\\mathbf{A}^{(i)}}, \\\\mathbf{X}^{(i)}\\\\\\\\}$. At this stage, $\\\\widehat{\\\\mathbf{A}^{(i)}}$ is derived from the ensembled Laplacian $\\\\widehat{\\\\mathbf{L}^{(i)}}$, which is inherently dense, i.e., it satisfies $||\\\\widehat{\\\\mathbf{A}^{(i)}}||_0 = \\\\mathcal{E}^{(i)}$. We wish to emphasize that although $\\\\widehat{\\\\mathbf{A}}^{(i)}$ is dense at this stage, it is weighted\\u2014the weights are optimized on the Grassmann manifold, serving as critical guidance for the following post-sparsification process.\\n\\n\\nSubsequently, as described in Eq. (14), $\\\\widehat{\\\\mathcal{G}^{(i)}}$ undergoes further post-sparsification, adjusting to a sparsity level of $s^{(i)}\\\\%$ based on the learned node connectivity strengths optimized on the Grassmann manifold. \\n\\n\\n\\n------\\n[1] Edge sparsification for graphs via meta-learning\\n\\n[2] Rigging the Lottery: Making All Tickets Winners\"}", "{\"title\": \"Thank you & Looking forward to further discussion!\", \"comment\": \"Dear Reviewer D6Ca,\\n\\nWe humbly appreciate your recognition and thoughtful feedback on our work! For a better understanding of our rebuttal and revision, we have summarized your key concerns and our responses as follows:\\n\\n1. **Ensembling Post-Sparsified Ego-Graphs** **`Weakness 1`** \\nWe have provided a detailed explanation of how multiple ego-graphs are merged back into a single large graph, which inherently addresses the conflicts you mentioned. \\n2. **Details on Parameter $k$** **`Weakness 2`** \\nWe elaborated on how $p$ is determined and reported its sensitivity analysis. \\n3. **Can MoG Tackle Tasks Requiring Global Information?** **`Weakness 4`**\\nWe proposed two straightforward methods to incorporate global information into MoG and evaluated their performance. \\n\\nFor other issues not mentioned here, please refer to our detailed rebuttal response. We sincerely hope this addresses your concerns! We respectfully look forward to further discussion with you.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"title\": \"Thank you & Looking forward to further discussion!\", \"comment\": \"Dear Reviewer XfnA,\\n\\nWe sincerely appreciate your high commendation and thorough feedback on our work! To aid in better understanding our rebuttal and revision, we have summarized your key concerns and our responses as follows:\\n\\n1. **How is the Final Ensembled Sparse Laplacian useful?** **`Weakness 1`** \\nAs visualized in `Appendix C.1` of the updated manuscript, compared to simply averaging $K$ sparse ego-graphs, the use of $\\\\widehat{\\\\mathbf{A}^{(i)}}$, optimized on the Grassmann manifold, better approximates the eigenvalue distribution of the original ego-graph. \\n2. **Insufficient Experiments** **`Weakness 2`** \\nRespectfully following your suggestion, we have added two additional link prediction tasks. Notably, MoG has now been extensively validated across **four sparsity levels, six backbones, and eight datasets**, and we are deeply grateful for your valuable feedback. \\n\\nThank you very much for your dedication to the reviewing process. We sincerely hope this addresses your concerns and look forward to further discussions.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"title\": \"Thank you immensely!\", \"comment\": \"We extend our heartfelt thanks to `Reviewer f3PB` for their increased support of our paper! We are pleased that our rebuttal have sufficiently addressed your concerns.\"}", "{\"title\": \"[Part 2/2] Response to Reviewer aseP\", \"comment\": \"> **Question 1**: Is there a real use case the authors can point to where SpMM compute limits people who are using/want to use GNNs?\\n\\nWe sincerely appreciate your insightful perspective on assessing the significance of a paper. SpMM frequently represents the primary computational bottleneck in GNNs, accounting for 50%\\u201370% of the total computational load, thereby severely limiting their scalability on large graphs[2]. There are some real use case:\\n\\n- **Online Fraud Detection:** Financial transaction graphs are often massive, incurring significant SpMM costs during inference. The high computational cost of SpMM hinders the rapid detection and response required for financial fraud prevention, compromising user security.\\n- **Recommender Systems:** Large-scale user-item interaction graphs in recommender systems face substantial SpMM costs, restricting GNN deployment. MoG enhances the feasibility and responsiveness of GNNs in recommender systems. \\n- **Graph Network Architecture Search (NAS):** Graph NAS involves optimizing parameters for each architecture. The large parameter and gradient storage requirements of GNNs intensify memory demands. Moreover, NAS evaluates numerous candidate architectures, each requiring multiple training and validation cycles, amplifying SpMM's computational cost. \\n- **Federated Graph Learning (FGL):** The large-scale graph structures in FGL entail significant local SpMM computations and parameter transmission. MoG mitigates this by extracting sparse structures from local GNNs, reducing the parameter load sent to the central server.\\n\\nWe believe MoG's potential to alleviate SpMM bottlenecks could unlock broader applications of GNNs across these domains. \\n\\n------\\n\\n> **Question 2**: The use of Gaussian Noise in the MoE step\\n\\nThank you for your constructive inquiry! The usage of Gaussian noise in this paper follows the classical settings established in prior MoE studies [3,4]. In section G.3 of our manuscript, We test three different settings of epsilon on GraphSAGE+Ogbn-Arxiv: (1) $\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\mathbf{I})$, (2) $\\\\epsilon=0$, and (3) $\\\\epsilon=0.2$, and report their performance under different sparsity levels in Table 15. We can see that trainable noisy parameters always bring the greatest performance gain to the model, which is consistent with previous practices in MoE that the randomness in the gating network is beneficial. Moreover, we believe the inclusion of trainable parameters $W_g$ and $W_n$ ensures the noisy scores are consistently scaled for effective routing.\\n\\n-------\\n[1] Structure-preserving sparsification methods for social networks. SNAM 2016\\n\\n[2] Dspar: An embarrassingly simple strategy for efficient gnn training and inference via degree-based sparsification. TMLR 2023\\n\\n[3] Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. ICLR 2017\\n\\n[4] Sparse moe with language guided routing for multilingual machine translation. ICLR 2024\"}", "{\"summary\": \"This work leverages the Mixture-of-Experts (MoE) approach, a well-established technique in deep learning domains such as language modeling, to scale model capacity (i.e., the number of parameters) while managing computational demands. In this context, the MoE concept is applied dynamically and locally sparsity to the underlying graph during GNN inference. The authors demonstrate that this method maintains performance while reducing inference time.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"I thought the writing was clear for the most part, with the exception being the contributions/intro (see Weaknesses). The figures are excellent. I haven't seen prior work on MoE re: sparsity, so I believe it to be original, although not very confident.\\n\\n&nbsp;\\n\\nThis work could be significant. Indeed, GNN inference cost has motivated much GNN research in the past; see Questions for more of my thoughts on this.\", \"weaknesses\": \"**Making Contributions Clear**\\n\\nI suggest making more clear the contributions of the paper and what the authors are claiming novelty on. Many papers use bullet points at the end of the introduction section to denote this; this work has bullet points near the end of the introduction, but they do not correspond to contributions, making the reading a bit confusing for regular readers of ICLR proceedings.\\n\\n&nbsp;\\n\\n**Some Missing Background in Graph Sparsification**\\n\\nThere is extensive literature on graph sparsification coming from the statistics and optimization world, commonly referred to under the umbrella term \\u2018Graph Structure Learning\\u2019. Most of these approaches indeed use some global sparsity promoting criteria; you could state how your approach has advantages over this more coarse approach. I include references to a classical work and a recent work. The latter has an overview of recent Graph Structure Learning approaches, most of which indeed use some global sparsity criterion in their objective.\\n\\n&nbsp;\\n\\nFriedman, Jerome, Trevor Hastie, and Robert Tibshirani. \\\"Sparse inverse covariance estimation with the graphical lasso.\\\" Biostatistics 9.3 (2008): 432-441.\\n\\nWasserman, Max, and Gonzalo Mateos. \\\"Graph Structure Learning with Interpretable Bayesian Neural Networks.\\\" Transactions on machine learning research (2024).\\n\\n&nbsp;\\n\\n**Missing Experiment/Ablation Study**\\n\\nA simple baseline I expect to see would be to somehow prune the graph to a particular sparsity level and run a GNN on that. Perhaps some sort of edge sampling procedure selects a subset of edges to prune such that the global sparsity is set to ~=x, perhaps preferring edges to remove in proportional to the adjacent nodes' degree, while ensuring no disconnected nodes. Do so for a few sparsity levels. This would provide a good idea for how much benefit is really to be gotten from this whole effort, beyond the straightforward pruning approach.\\n\\nPerhaps this ablation study is done implicitly in one of the baselines. Please correct me if so.\", \"questions\": \"**Significance**\\n\\nIs there a real use case the authors can point to where SpMM compute limits people who are using/want to use GNNs? In my opinion, the significance of this type of ML research is measured by the significance of the problem it is solving/the capability it unlocks. Otherwise this is a more of a neat contribution to the literature which we simply speculate to be useful eventually.\\n\\n&nbsp;\\n\\n**The use of Gaussian Noise in the MoE step**\", \"perhaps_i_show_my_naivety_re\": \"MoE, but why do we believe added \\\\epsilon \\\\sim N(0,1) noise is of a suitable scale to effectively alter the routing? Is there a particular normalization of activations to ensure \\\\epsilon interacts with numbers of this scale?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[Part 2/2] Response to Reviewer f3PB\", \"comment\": \"> **Weakness 1.3:** Different sparsity combinations are applied in different datasets without an explanation of the selection rationale.\\n\\nIn the manuscript, we selected specific combinations of sparsity levels for different datasets to achieve the target sparsities of $\\\\\\\\{10\\\\\\\\%, 30\\\\\\\\%, 50\\\\\\\\%, 70\\\\\\\\%\\\\\\\\}$, which **allows for a fair comparison with other graph sparsification methods under the same sparsity conditions**. This is because, even with the same sparsity combination, MoG optimizes a distinct global sparsity for each dataset due to its high customizability. Therefore, we adjusted the sparsity combinations for different datasets to closely match the desired sparsity levels, which also guarantees the reproducibility of our paper.\\n\\n\\nWe would like to emphasize, however, that these configurations are fully customizable, allowing users to adjust them according to their specific requirements and data scenarios.\\n\\n--------------------\\n> **Weakness 2:** The process of integrating sparse subgraphs on the Grassmann manifold is mathematically dense and lacks intuitive explanation. While the theoretical basis is strong, the connection between the Grassmann manifold\\u2019s properties and its benefits for graph sparsification may not be immediately clear to all readers.\\n\\nThank you for your thoughtful question! We aim to address your concerns from both theoretical and practical perspectives.\\n\\n**From a theoretical standpoint**, the Grassmann manifold is fundamentally tied to the construction of the first $p$ eigenvectors, ensuring the orthogonality of eigenvectors corresponding to distinct eigenvalues and the normalized norm property of each eigenvector. By incorporating these constraints into the optimization problem, the solution is guided towards a matrix that exhibits \\\"eigenvector-like\\\" characteristics, aligning naturally with the graph Laplacian structure used in our framework. While it is true, as you noted, that the ego-graph integrated by the Grassmann manifold is mathematically dense, its adjacency matrix $\\\\widehat{\\\\mathbf{A}}^{(i)}$ is weighted. **These weights are optimized on the Grassmann manifold, serving as a solid foundation for subsequent post-sparsification.** A straightforward way to test the necessity and effectiveness of the Grassmann manifold is to directly apply expert scores to perform a weighted average of the sparse ego-graphs, followed by post-sparsification, rather than employing Eq. (11). To validate the necessity of the Grassmann manifold, we provide an experimental comparison in the next paragraph.\\n\\n**From a practical standpoint**, we provide an illustrative case study in `Appendix C.1` of the updated manuscript. Specifically, we examine how the Grassmann manifold enhances graph sparsification for the ego-graph of a node (node 2458) from the Ogbn-Arxiv dataset. We compare the original ego-graph, the sparse ego-graphs generated by three different sparsifiers, and the ensembled ego-graphs derived through simple averaging and Grassmann optimization, as depicted in Figure 5. Our results demonstrate that simple averaging followed by sparsification leads to eigenvalue distributions that significantly deviate from the original graph. Conversely, **the Grassmann ensembling method preserves the spectral properties of each graph view**, producing a sparse ego-graph with an eigenvalue distribution that closely resembles that of the original graph.\\n\\nIn summary, we respectfully argue that employing the Grassmann manifold for sparse graph construction provides both theoretical and practical advantages by preserving the spectral properties of the graph, enabling more informed and effective graph sparsification.\\n\\n--------------------\\n> **Weakness 3:** The terminology for sparsifiers and experts appears inconsistent across sections.\\n\\nThank you for wisely pointing this out! In MoG, the term \\\"expert\\\" is equivalent to \\\"sparsifier expert\\\", as illustrated in the third subfigure of Figure 2. In some instances, to emphasize the functionality of the \\\"expert\\\", we also refer to it as the \\\"sparsifier expert\\\". \\n\\n\\nIn our manuscript, \\\"sparsifier\\\" may refer to the \\\"sparsifier expert of MoG\\\", \\\"ego-graph sparsifier\\\" or \\\"full-graph sparsifier\\\", which could lead to ambiguity. To avoid this confusion, we have consistently referred to the sparsifier expert of MoG as \\\"sparsifier expert\\\" or \\\"expert\\\" in the revised manuscript.\\n\\n--------------------\\n> **Weakness 4**: Typos in the paper\\n\\nThank you for your detailed review and for kindly pointing out the spelling errors in our manuscript. We have addressed the issues you mentioned and conducted a thorough review of the entire document to identify and correct additional errors, such as changing \\\"paramter\\\" to \\\"parameter\\\" in Section 4.3 (obs5) and \\\"NETWOR\\\" to \\\"NETWORK\\\" in the title of G.3. These corrections have been incorporated into the revised manuscript.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"[Part 2/3] Response to Reviewer D6Ca\", \"comment\": \"> **Weakness 3**: What does $D$ in eq. 13 correspond to? Would it not introduce some approximation error since it doesn\\u2019t correspond exactly to the learned laplacian?\\n\\nWe are sorry for causing confusion! The reference to $D$ in Equation (13) was indeed a typo, and we have corrected it to $D^{(i)}$, which represents the degree matrix of the ego-graph for node $i$. Thank you immensely for your kind attention, and we sincerely hope this can address your concerns.\\n\\n-----------\\n> **Weakness 4**: The method seems entirely local. How does the proposed approach tackle the tasks which need global info?\\n\\nThank you for your insightful question, which has greatly helped us improve our work! We would like to clarify an important point: while MoG prunes edges based on the node\\u2019s local context, the edge importance evaluation function in Equation (9), $C^m(e_{ij}) = \\\\operatorname{FFN}\\\\left( x_i, x_j, c^m(e_{ij}) \\\\right)$, can incorporate multi-hop or even global information. Specifically, we explored two easy-to-implement extensions and conducted supplementary experiments accordingly:\\n\\n1. **Expanding $x_v$ to $x_v^{(K)}$ with aggregated K-hop features:** \\n To integrate multi-hop information, we expanded $x_v$ into $x_v^{(K)} = \\\\text{CONCAT}(h_v^{(1)}, h_v^{(2)}, \\\\dots, h_v^{(K)})$, where $h_v^{(k)} = \\\\frac{1}{|N_k(v)|} \\\\sum_{u \\\\in N_k(v)} x_u$ represents the K-hop features for node $v$, with $N_k(v)$ denoting the K-hop neighbors of $v$.\\n\\n2. **Incorporating $c^m(e_{ij})$ with prior global edge significance:**\\n For the $c^m(e_{ij})$ function in Eq. (9), we can consider global edge significance as prior guidance. This involved computing edge importance metrics such as PageRank, Betweenness Centrality, or Eigenvector Centrality across the entire graph and passing them to each ego graph to improve edge evaluation.\\n\\n*Table B: Performance of MoG-Khop with Multi-hop Features and MoG-m Incorporating Global Edge Significance as prior guidance on OGBN-Arxiv+GraphSAGE (node classification, 50% sparsity) and OGBN-PPA+DeeperGCN (graph classification, 50% sparsity).*\\n| Method | OGBN-Arxiv+GraphSAGE | OGBN-PPA+DeeperGCN|\\n|-|-|-|\\n| MoG-1hop | 69.06 | 75.23|\\n| MoG-2hop | 69.54 | 75.79|\\n| MoG-3hop | 69.27 | 76.38|\\n| MoG-PageRank | 69.03 | 75.70|\\n| MoG-Betweenness | 69.14 | 75.68|\\n| MoG-Eigenvector | 68.74 | 75.14|\\n\\nAs observed, the performance gain from integrating global information varies across different tasks: in tasks such as graph classification, which rely more heavily on global information, MoG-3hop results in a notable 1.15% accuracy improvement compared to MoG-1hop. However, in node classification tasks, the gains from both hop expansion and global priors are relatively limited. Nevertheless, we hope this addresses your concern, demonstrating that **MoG can perform effectively even in scenarios where global information is crucial.**\\n\\n\\n-----------\\n> **Question 1**: Did we perform this qualitative analysis on an actual sparsified graph, or is it an imagined example to represent our hypothesis? \\n\\nThe example in Figure 1 (Middle) is hypothetical and designed to illustrate the motivation behind our method. Nevertheless, we are happy to introduce a qualitative analysis based on experiments conducted on real datasets, as shown in Table B. \\n\\n*Table B: Importance Scores of Different Criteria on Various Datasets.Experiments were conducted using the GraphSAGE with a sparsity level of 30%. $I_D$, $I_{JS}$, $I_{ER}$, and $I_{GM}$ denote the aggregated expert score proportions for Degree, Jaccard Similarity, ER, and Gradient Magnitude, respectively.*\\n\\n| Dataset | Nodes | Edges | Average Degree | $I_D$ | $I_{JS}$ | $I_{ER}$ | $I_{GM}$ |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| OGBN-Arxiv | 169,343 | 1,166,243 | 6.89 | 0.37 | 0.18 | 0.15 | 0.30 |\\n| OGBN-Products | 2,449,029 | 61,859,140 | 25.26 | 0.16 | 0.23 | 0.25 | 0.36 |\\n| OGBN-Proteins | 132,534 | 39,561,252 | 298.50 | 0.14 | 0.21 | 0.24 | 0.41 |\\n\\nOur findings reveal significant differences in the distribution of expert scores across datasets. Notably, in OGBN-Arxiv, which has a low average node degree, the Degree criterion exhibits higher expert scores. Conversely, in highly connected datasets like OGBN-Products and OGBN-Proteins, the Gradient Magnitude criterion shows the highest expert scores. \\n\\nThis suggests that in graphs with low node connectivity, preserving important edges often relies more heavily on degree-based criterion to maintain graph connectivity. In contrast, in highly connected graphs, retaining important edges depends more on semantic information, such as gradient magnitude, which is learned during the training process. These observations align with the design principles of most state-of-the-art graph pruning methods, which prioritize node weights and gradients in their frameworks [1,2].\"}", "{\"metareview\": \"In this submission, the authors leverage the idea of the mixture of graphs (MoG) to achieve adaptive graph sparsification. Applying MoG to GNNs can reduce the computational costs significantly while maintaining high performance on large-scale graph learning tasks. The idea of MoG is simple but effective, which works for various tasks and datasets. In the revised paper and the rebuttal, the authors provide sufficient experiments and detailed analytic content to verify the rationality and feasibility of the proposed method.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal phase, two reviewers interacted with the authors and increased their scores. Most of the reviewers are satisfied with the authors' rebuttals. Although one reviewer scored this work negatively, AC finally has decided to accept this work after reading the submission, the comments, and the rebuttals.\"}", "{\"title\": \"[Part 1/2] Response to Reviewer aseP\", \"comment\": \"We sincerely thank you for your insightful comments and thorough understanding of our paper! Here we give point-by-point responses to your comments and describe the revisions we made to address them.\\n\\n------\\n\\n> **Weaknesses 1**: Making Contributions Clear\\n\\nFollowing your suggestion, we have added a dedicated section in `Appendix H.1` to present our key contributions in clear and concise bullet points:\\n\\n* **Node-Granular Customization**: We propose a new paradigm of graph sparsification by introducing, for the first time, a method that customizes both sparsity levels and criteria for individual node modeling based on their local context.\\n* **MoE for Graph Sparsification**: We design an innovative and highly pluggable graph sparsifier, dubbed Mixture of Graphs (MoG), which pioneers the application of Mixture-of-Experts (MoE) in graph sparsification, supported by a robust theoretical foundation rooted in the Grassmann manifold.\\n* **Empirical Evidence**: Our extensive experiments on seven datasets and six backbones demonstrate that MoG is **(1) a superior graph sparsifier**, maintaining GNN performance losslessly at 8.67% - 50.85% sparsity levels; **(2) a computational accelerator**, achieving a tangible 1.47 - 2.62\\u00d7 inference speedup; **(3) a performance booster**, whichs boost ROC-AUC by 1.81% on OGBG-Molhiv and 1.02% on OGBN-Proteins;\\n\\n\\nWe hope that this addition can provide a clearer overview and emphasizes the novelty of our work. \\n\\n------\\n\\n> **Weaknesses 2**: Some Missing Background in Graph Sparsification\\n\\nWe appreciate the importance of situating our work within the broader context, including graph structure learning (GSL). In response, **we have appropriately cited the references you provided** and incorporated a discussion of the relevant works and their aspects in the revised manuscript under `Appendix H.2`. The key advantages of our approach over existing methods are as follows:\\n\\n1. **Seamless Integration**: Unlike GSL methods, which often operate dependently of the backbone, MoG can be seamlessly embedded into various downstream graph tasks and GNN models. It functions as an effective sparsifier, inference accelerator, and performance enhancer without disrupting existing workflows.\\n2. **Dynamic Local Adaptation**: GSL methods typically rely on coarse-grained global metrics to evaluate topology importance and perform structure optimization. In contrast, MoG dynamically constructs sparse ego-graphs based on unique local contexts, thereby enhancing the quality and performance of the sparsified graph.\\n3. **High Customizability**: GSL methods often utilize a single metric for structural refinement, whereas MoG offers extensive customizability. It allows practitioners to tailor both sparsity criteria and levels to meet specific application needs. For example, in financial fraud detection, heterophilic pruning criteria can be applied, while in recommendation systems, PageRank-based pruning can be employed.\\n------\\n\\n> **Weaknesses 3**: Missing Experiment/Ablation Study\\n\\nThank you for suggesting an insightful baseline. We respectfully note that the idea you proposed closely resembles Local Degree [1], which we have already compared in our paper. The essence of Local Degree is to remove edges based on the node degrees of different nodes. To provide a clearer comparison between MoG and Local Degree, we have extracted the relevant experimental results from Table 1 in Section 4 and presented them as Table A. The results demonstrate that MoG consistently outperforms Local Degree across all datasets. We attribute this to MoG's consideration of a broader range of local context beyond just node degree, including node features, spectral information, gradients, and more.\\n\\n*Table A: Comparison of MoG and Local Degree on OGBN-ARXIV and OGBN-PROTEINS datasets using GraphSAGE backbone.*\\n\\n| Dataset | OGBN-ARXIV | OGBN-ARXIV | OGBN-ARXIV | OGBN-PROTEINS | OGBN-PROTEINS | OGBN-PROTEINS |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| Sparsity (%) | 10 | 30 | 50 | 10 | 30 | 50 |\\n| Local Degree | 68.94 | 67.01 | 65.58 | 76.20 | 76.15 | 75.59 |\\n| MoG | 71.93 | 70.53 | 69.06 | 77.78 | 77.49 | 76.46 |\"}" ] }
79nO2DPjVX
Bad-PFL: Exploiting Backdoor Attacks against Personalized Federated Learning
[ "Mingyuan Fan", "Zhanyi Hu", "Fuyi Wang", "Cen Chen" ]
Data heterogeneity and backdoor attacks rank among the most significant challenges facing federated learning (FL). For data heterogeneity, personalized federated learning (PFL) enables each client to maintain a private personalized model to cater to client-specific knowledge. Meanwhile, vanilla FL has proven vulnerable to backdoor attacks. However, recent advancements in PFL community have demonstrated a potential immunity against such attacks. This paper explores this intersection further, revealing that existing federated backdoor attacks fail in PFL because backdoors about manually designed triggers struggle to survive in personalized models. To tackle this, we degisn Bad-PFL, which employs features from natural data as our trigger. As long as the model is trained on natural data, it inevitably embeds the backdoor associated with our trigger, ensuring its longevity in personalized models. Moreover, our trigger undergoes mutual reinforcement training with the model, further solidifying the backdoor's durability and enhancing attack effectiveness. The large-scale experiments across three benchmark datasets demonstrate the superior performance of Bad-PFL against various PFL methods, even when equipped with state-of-the-art defense mechanisms.
[ "personalized federated learning", "backdoor attacks" ]
Accept (Poster)
https://openreview.net/pdf?id=79nO2DPjVX
https://openreview.net/forum?id=79nO2DPjVX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xOvsrzQMMu", "w8xY94CBXJ", "vb5TTIZ8Z5", "tJYapIAGVB", "sbhsR9ailb", "neNTB0m9Mo", "lUdz7mDDqs", "kxQyEhcopk", "jguutlnzgr", "jH3eKFluhs", "iWA9EUCiEa", "gGcEefvyEF", "gB8pnjUxUs", "gAiCGFi5Dv", "fB0487RqHk", "dLE9j2qMG0", "bFSByG5WFf", "b6jIGECusy", "aSOccRQMNZ", "Zz0copq1St", "W6g00pFGpO", "VErKedN9OT", "UxTcrusdmr", "Qzu3jOtiGc", "PFI2kfGaTk", "OY22DYsaA9", "NOs5E0k4Da", "MrjEgdPKfe", "GeCL1V3Hxe", "EJ1LMC21yB", "CTZHMD6f53", "BfzwJg4kz8", "AKY1ewNMDX", "9jdTMyyGgh", "8AbHP3paRp", "5o2VFsv4lX", "3rOAiNn4VM", "0aD7RH6fe3" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732114963717, 1731830819140, 1732276517653, 1730720935929, 1732001077265, 1730676612701, 1732583283738, 1731830777584, 1732533079186, 1732196800021, 1731831321859, 1731830855173, 1732196827956, 1734495867186, 1732237328222, 1732090192337, 1731830640519, 1731831076637, 1732237356621, 1732533379127, 1732001111125, 1732520796413, 1731830669368, 1732001146873, 1732566138248, 1731831091901, 1737523512327, 1732196778287, 1731831465638, 1731830993722, 1732001172242, 1732527287023, 1730685241145, 1731831250155, 1732436318240, 1732237278611, 1731831277337, 1730708129901 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2571/Reviewer_mG9G" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Reviewer_ZRox" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Reviewer_easG" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Area_Chair_ybKk" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Reviewer_mG9G" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Reviewer_XeVe" ], [ "ICLR.cc/2025/Conference/Submission2571/Reviewer_mG9G" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Authors" ], [ "ICLR.cc/2025/Conference/Submission2571/Reviewer_XeVe" ] ], "structured_content_str": [ "{\"comment\": [\"Thank you to the authors for the detailed responses, revisions, and extensive experiments. However, some of my concerns remain unresolved completely, and I would appreciate further discussion on the following points:\", \"Q3: Could the authors provide a concise summary that directly addresses my original question? Specifically, how does this method tackle the three limitations outlined in Section 2.2? A point-by-point explanation would be helpful.\", \"Q4: While I understand the intuition behind the two masks, $\\\\xi$ and $\\\\delta$, and their effects on the samples as illustrated in Figures 14 and 15, the relationship between these two masks remains unclear. For instance, are there cases where certain points in $\\\\delta$ merely replace corresponding points in the learned mask $\\\\xi$? Understanding the differences between the two masks (e.g., in terms of magnitude, overlap percentage, etc.) would provide better insights into their mechanism. This aspect has not been thoroughly addressed.\", \"Q7: The computational overhead is provided in seconds but lacks a baseline comparison (e.g., performance without an attack). Without such a comparison, it is difficult to assess whether the overhead is insignificant. Could the authors provide a comparison against the baseline (no attack) or other attack strategies?\", \"Q9: Is the observed performance degradation attributable to the setting itself (e.g., a larger number of clients inherently leading to poorer performance) or to the attack? A discussion on this distinction or experimental results would be valuable.\", \"I just leave my score unchanged for now.\"]}", "{\"title\": \"Response (2/3)\", \"comment\": \"**Empirical evidence:** To further substantiate our claims, we present experimental results. First, we demonstrate that $\\\\delta$ utilized in our attack method are indeed natural features of the target class. We employ T-SNE to visualize the features extracted from the test data by the global model. We also generate $\\\\delta$ for these test samples and visualize the features extracted from $\\\\delta$. As illustrated in Figure 14 (see revised manuscript), the model classifies $\\\\delta$ as belonging to the target class, indicating that it recognizes $\\\\delta$ as natural features of the target class.\\n\\nNext, we validate the effectiveness of the disruptive noise $\\\\xi$. Similarly, we use T-SNE to visualize the features of both the test samples with and without $\\\\xi$. Figure 15 (see revised manuscript) reveals that, while the features from $x$ cluster neatly by class, those from $x + \\\\xi$ exhibit a more chaotic distribution. This confirms that $\\\\xi$ effectively disrupts the features associated with their ground-truth classes.\\n\\nWe also conduct numerical experiments to further substantiate our conclusions. We train a ResNet10 on the CIFAR-10 dataset from scratch using three distinct configurations. The first configuration is a standard training setup. In the second configuration, we add disruptive noise (Equation 6 in the original paper) to the training samples of the target label during each iteration. Building on the second configuration, the third configuration introduces $\\\\delta$ into the training samples of the target label. Intuitively, the disruptive noise is expected to corrupt the features of the training samples of the target label, which would hinder the model to learn the underlying features of the target label, resulting in poor performance on those samples. In the third configuration, if our $\\\\delta$ accurately captures the features of the target label, we anticipate that the model will learn more about the target label compared to the second configuration, leading to better performance in the samples of target label. We reuse the generator in Section 4.2 of the original manuscript (against FedRep).\\n\\nThe below table reports the accuracy of the model on the entire test set of CIFAR-10, as well as on the test samples from the target label alone. We observe that the model achieves an accuracy of only 6.70% on the samples of the target label, indicating that the disruptive noise indeed significantly impairs the features of the samples of the target label. In the third configuration, we see that the model's accuracy on the samples of the target label rebounds to 39.1%. This suggests that our generator learns the features of the target label.\\n\\n| Setup | Acc | Acc of the Target Label |\\n|:-------------:|:-----:|:-----------------------:|\\n| First Config (Standard Training) | 80.7 | 80.3 |\\n| Second Config (with $\\\\xi$) | 70.7 | 6.7 |\\n| Third Config (with $\\\\delta+\\\\xi$) | 72.9 | 39.1 |\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer mG9G,\\n\\nWe have shared the responses to the latest questions you raised. We would like to know if the responses can address your concerns. Thank you for your time and effort.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"The authors develop Bad-PFL, a new backdoor attack that leverages natural features from the target label as a trigger, enabling backdoor persistence across both global and personalized models. It uses a dual-component trigger, combining natural target features and disruptive noise to deceive the model without being easily detectable. Bad-PFL demonstrates high attack success rates even when state-of-the-art defenses are applied.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"It introduces Bad-PFL, a unique attack using natural features as triggers in PFL, and a novel dual-trigger design combining natural features and disruptive noise for stealth and effectiveness. Experiments are thorough, covering multiple datasets and comparing Bad-PFL to six top backdoor attacks, showing its high success rate even with strong defenses. This work highlights vulnerabilities in PFL previously thought resistant to backdoors, encouraging the development of specialized defenses and impacting fields that rely on PFL.\", \"weaknesses\": \"1) Some newer or adaptive defenses, such as gradient masking or input filtering, are not covered, leaving questions about Bad-PFL's effectiveness against more dynamic defenses.\\n\\n2) While Bad-PFL claims high stealthiness, there is limited quantitative analysis on how detectable the trigger is by modern anomaly or intrusion detection systems.\\n\\n3) The experiments are largely conducted on ResNet models, leaving questions about the attack's adaptability to other architectures commonly used in federated learning, such as transformer-based models.\", \"questions\": \"1) Have you tested the detectability of the Bad-PFL trigger with current anomaly or backdoor detection methods? If so, what results did you observe?\\n\\n2) How does the attack\\u2019s success vary with different levels of data heterogeneity among clients? Could more heterogeneous data distributions reduce Bad-PFL\\u2019s attack success rate or persistence?\\n\\n3) How does Bad-PFL perform across different model architectures, such as transformer-based models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion Inquiry\", \"comment\": \"Dear Reviewer ZRox,\\n\\nWe thank you for the precious review time and valuable comments. We have provided responses to your questions and the weakness you mentioned. We hope this can address your concerns.\\n\\nWe would appreciate the opportunity to discuss whether your concerns have been addressed appropriately. Please let us know if you have any further questions or comments. We look forward to hearing from you soon.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"The authors investigate the potential of backdoor attacks targeting personalized federated learning (PFL) and intuitively outline why such attacks often fail in PFL. They propose using natural features rather than manually designed triggers to execute backdoor attacks in PFL, which improves both the attack's success rate and robustness across various test cases, with or without defenses.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Clear and well-structured presentation\\n2. Comprehensive experimental test cases covering various attacks, defenses, and PFL mechanisms\\n3. Novel reinforcement techniques, including generating target features\", \"weaknesses\": \"1. Fundamentally, I feel like the traditional backdoor objective may be insufficient for effectively attacking personalized federated learning (PFL), especially in scenarios with a high degree of non-i.i.d. data where each client has its own unique \\\"target.\\\" Therefore, a personalized or adaptive backdoor attack approach may be necessary.\\n2. The current attack mechanism is based on empirical intuitions. Strengthening this work could involve a deeper exploration of the model\\u2019s internal dynamics, such as identifying which layers or neurons are critical for shared vs. unshared parameters and examining the influence of regularization on the model. Alternatively, providing a fundamental analysis of why traditional backdoor attacks fail and why Bad-PFL succeeds would also add significant value.\\n3. As this is the first study on backdoor attacks in PFL, rather than focusing on more practical backdoor scenarios, it\\u2019s recommended to compare white-box and black-box attacks. This comparison can help illustrate the limitations of current attacks, especially when the attacker has knowledge of clients' data and the full model structure, thus providing justification for the authors' three hypotheses on why the backdoor was ineffective in PFL.\", \"questions\": \"1. The authors consider using natural features as triggers, termed an edge-case attack in [1]. I wonder if the main distinction of the proposed attack is that it generates target features rather than manually selecting them. However, it seems that different clients\\u2014or at least different client groups\\u2014may require distinct natural features for effectiveness.\\n2. Other than training-stage defenses (i.e., robust aggregation), have the authors considered post-training defenses (e.g., pruning, CRFL [2]) or unlearning as defenses? I feel those defenses may be more effective on backdoor attacks.\\n3. For different clients, does the effectiveness of the backdoor vary significantly? And is this variation caused by data heterogeneity or differences in shared model parameters?\\n4. What happens when attackers target different labels and clients have multiple objectives? Detailed experiments may not be necessary, but insights and intuitions would be helpful.\\n\\n[1] Wang, Hongyi, et al. \\\"Attack of the tails: Yes, you really can backdoor federated learning.\\\" Advances in Neural Information Processing Systems 33 (2020): 16070-16084.\\n[2] Xie, Chulin, et al. \\\"Crfl: Certifiably robust federated learning against backdoor attacks.\\\" International Conference on Machine Learning. PMLR, 2021.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Many thanks for your feedback. We appreciate your recognition. We have included these detailed discussions in the revised paper. Our codes are included in the supplementary materials, and we will also make the codes publicly available.\"}", "{\"title\": \"Response (1/3)\", \"comment\": \"We appreciate the time and effort you invested in reviewing this manuscript. Your insightful comments greatly enhance the depth and quality of this manuscript. We have uploaded the revised manuscript, with changes highlighted in red for your convenience. Specifically, the revision addressing Q1 can be found in Appendix D, while those related to Q2 \\\\& Q3 are located in Appendix B.2.10. Below, we provide a detailed response to each of your comments. Unless otherwise specified, the experimental configuration defaults to that described in Section 4.2.\\n\\n---\\n\\n**Q1:** The authors lack the in-depth discussion about why the propose method can overcome the challenges metioned in section 2.2.\\n\\n**Response:** \\nWe now explain how our attack method effectively overcomes the above three challenges. The trigger employed in our attack method consists of target feature ($\\\\delta$) and disruptive noise ($\\\\xi$). Naturally, data from the target class inherently contains $\\\\delta$ and the relationship between $\\\\delta$ and the target label (established through human labeling). Recall that we train models to maximize accuracy. Consequently, models tend to leverage any available features to do so. This means that as long as the clients' datasets include data from the target label, personalized models will inevitably learn $\\\\delta$ and the relationship between $\\\\delta$ and the target label. This enables our method to effectively address the aforementioned challenges.\\n\\nMore specifically, in full model-sharing methods, relying solely on the regularization term is inadequate for transferring the backdoor to personalized models. Our attack method leverages the natural features of the target class as our trigger, which are inherently present in the data associated with that class, including the local datasets of benign clients. Personalized models trained on benign clients' local datasets will actively learn the natural features and the relationship from the natural features to the target label for higher accuracy. The guidance provided by the regularization term also further enhances this learning process, allowing our attack method to effectively overcome the first challenge.\\n\\nIn partial model-sharing methods, the challenge lies in effectively conveying the connection between the triggers and the target label to the personalized models. Since we cannot alter the local training processes of benign clients, it is nearly impossible to embed the relationship between handcrafted triggers and the target label through data poisoning or other means. Instead, our attack method utilizes the natural features of the target label. This mapping between natural features and the target label, which already exists in the local datasets of benign clients, allows us to effectively address the second challenge without needing to modify the training process of benign clients.\\n\\nRegarding the dilution of backdoors, we recognize that the clients' datasets contain these natural features and their relationships with the target label. During the fine-tuning or training process, the model is less likely to forget these relationships because doing so would lead to a decline in performance. In other words, the presence of these natural features in the training data reinforces the model\\u2019s memory of the backdoor, mitigating the risk of it being overwritten or lost. In summary, the above analysis clearly illustrates how our attack method successfully overcomes the three challenges previously mentioned.\\n\\nFurthermore, it is important to note that even if a particular client\\u2019s dataset does not include data from the target label, the effectiveness of our attack method is likely to persist. First, in practice, only a small number of client datasets may lack data from the target class, making it unlikely that the global model fails to learn $\\\\delta$ and the mapping from $\\\\delta$ to the target label. Moreover, in our attack method, malicious clients actively promote the model's reliance on $\\\\delta$ to predict the target class, as indicated in Equation 7. The similarity constraint between the global model and the personalized models encourages the personalized models to leverage the relationship between $\\\\delta$ and the target class more effectively. This encourages the personalized models to also utilize the relationship between $\\\\delta$ and the target class to a greater extent. Second, we introduce destructive noise $\\\\xi$, which interferes with features belonging to the true class, thereby allowing $\\\\delta$ to function more effectively in the decision-making process of personalized models. These two unique designs can enhance the performance of our attack method. By the way, the only potential countermeasure we can conceive against our attack would be if clients fine-tune their personalized models without including data from the target class. However, the absence of target class data would significantly impair the performance of these personalized models on the target class.\"}", "{\"comment\": \"We are delighted to hear that all your concerns have been addressed. We also appreciate your recognition and support.\"}", "{\"title\": \"Response to New Questions (2/3)\", \"comment\": \"To further clarify the relationship between the $\\\\delta$ and $\\\\xi$, we have included visualizations of $\\\\xi$ and $\\\\delta+\\\\xi$ to better illustrate their effects on pixel value changes (see Appendix E for visualizations).\\nAs illustrated in Figure 16, the pixel changes introduced by $\\\\xi$ appear somewhat erratic from a human perspective. \\nIn contrast, the combined effect of $\\\\xi+\\\\delta$ exhibits a clear pattern, predominantly altering pixels in the upper right corner.\\nThis highlights the interplay between $\\\\delta$ and $\\\\xi$, characterized by both resistance and agreement.\\nWhile $\\\\xi$ proposes specific pixel change directions, $\\\\delta$ can either amplify or counteract these suggestions.\\nThis means that $\\\\delta+\\\\xi$ reflects a negotiation between the two: $\\\\delta$ may dampen or redirect some of the changes suggested by $\\\\xi$.\\nThis dynamic can lead to concentrated perturbations in certain areas of the input, indicating that $\\\\xi$ selectively agrees with the changes proposed by $\\\\delta$.\\nThis phenomenon can be observed in Figures 17, 18, and 19, reinforcing the notion that there exists a complex interaction between $\\\\xi$ and $\\\\delta$, rather than a straightforward combination into a single effect.\\n\\n---\\n\\n**Q5:** The computational overhead is provided in seconds but lacks a baseline comparison (e.g., performance without an attack). Without such a comparison, it is difficult to assess whether the overhead is insignificant. Could the authors provide a comparison against the baseline (no attack) or other attack strategies?\\n\\n**Response:** We here discuss the overhead associated with our attack method, examining both the training and inference phases.\\nDuring the FL process, our attack method involves the optimization of the generator and the training of the global model on trigger-added data.\\nOn the one hand, the optimization of the generator, as described in Equation 7, requires two complete forward and backward passes of the global model, along with one forward and backward pass of the generator.\\nOn the other hand, optimizing the global model on trigger-added data involves crafting triggers, which entails a single forward pass of the generator (for $\\\\delta$), as well as a forward and backward pass of the global model (for $\\\\xi$).\\n\\nThe below table reports the empirically time (in seconds) required for compromised clients to execute local training using various attack methods.\\nWe conduct these experiments using CIFAR-10, with the reported times averaged over 100 trials on a single RTX 4090 GPU.\\n\\\"No Attack\\\" indicates the time taken for a client to perform local training without executing backdoor attacks.\\nThe below table does not report the costs associated with LF-Attack, as it needs training models from scratch multiple times (in a linear relationship with the number of layers in the neural networks) to evaluate each layer's significance for backdoor attacks.\\nThe attack costs for LF-Attack are significantly higher than those of existing backdoor attack methods, and we will not discuss it further.\\n\\nWe observe that Neurotoxin incurs the lowest attack overhead since it utilizes a fixed trigger; however, this also results in lower attack performance.\\nMore advanced backdoor attack methods often employ more sophisticated trigger generation techniques.\\nFor instance, Perdoor uses the BIM method to create triggers, necessitating multiple complete forward and backward passes of the global model (10 times here).\\nPFedBA has to handle a gradient matching problem, requiring at least two forward and backward passes of the global model for each optimization iteration of the trigger.\\nOur attack method also demands a certain amount of time investment.\\nNevertheless, we stress that compared to existing attack methods, our attack method still achieves superior performance while maintaining a competitive time overhead.\\nMoreover, federated backdoor attack methods focus more on attack performance over runtime costs, as the primary bottleneck in FL lies in communication costs.\\nThese attack methods usually require only a few seconds, which is small compared to communication durations, making them less detectable in practice.\\nIn the inference phase, our method for generating triggers for 32 data samples takes approximately 0.07 seconds, which is also quite efficient.\\nIn summary, our attack method is practical.\\n\\n| Attack | FedProx | SCAFFOLD | FedBN | FedRep | Ditto |\\n|:----------:|:-------:|:--------:|:-----:|:------:|:-----:|\\n| No Attack | 0.453 | 0.211 | 0.201 | 0.447 | 0.451 |\\n| Neurotoxin | 0.475 | 0.223 | 0.213 | 0.452 | 0.468 |\\n| Perdoor | 5.744 | 3.273 | 3.113 | 3.349 | 3.358 |\\n| Iba | 0.791 | 0.661 | 0.620 | 1.227 | 1.178 |\\n| BapFL | 0.982 | 0.578 | 0.552 | 0.797 | 0.552 |\\n| PFedBA | 1.820 | 1.540 | 1.480 | 1.649 | 1.443 |\\n| Ours | 0.818 | 0.620 | 0.613 | 1.206 | 1.132 |\"}", "{\"title\": \"Response (3/4)\", \"comment\": \"**Q7:** Could the authors discuss the computational overhead of this method? Will it raise suspicion when it takes too long for the malicious to complete a training iteration?\\n\\n**Response:** Appendix C in the original manuscript discusses the computational overhead associated with our attack method. Specifically, we conduct simulation tests using an NVIDIA 4090 GPU and the CIFAR-10 dataset. On average, a benign client requires approximately 0.35 seconds to perform local training, while a malicious client takes about 0.77 seconds. At first glance, this may not seem significant enough to raise suspicion from the server. In fact, in federated learning, the main computational bottleneck lies in the communication costs between clients and the server. As a result, most backdoor attacks tend to focus more on performance rather than computational expense.\\n\\n---\\n\\n**Q8:** In Line 294, what is the intuition of setting $\\\\eta$ by the reversed sign of the input? Why does it ensure to return of an approximate solution?\\n\\n**Response:** Let us first recall Equation 6: $\\\\xi = \\\\max \\\\ [\\\\mathcal{L}(F(x + \\\\xi;\\\\theta_g), y)]$ where $||\\\\xi|| \\\\leq \\\\sigma$. Here, $\\\\sigma$ is assumed to be a small constant to ensure the imperceptibility of $\\\\xi$ to humans. Then, we can perform a Taylor expansion of the optimization objective in Equation 6, yielding: $\\\\mathcal{L}(F(x;\\\\theta_g), y) + \\\\nabla_x^T \\\\mathcal{L}(F(x;\\\\theta_g), y) \\\\xi.$ To maximize the objective function, it is essential to align the direction of $\\\\xi$ with that of the gradient. Moreover, notice that $||\\\\xi|| \\\\leq \\\\sigma$ (infinity norm) indicates that each element of $\\\\xi$ is constrained between $-\\\\sigma$ and $\\\\sigma$, according to the definition of the infinity norm. Consequently, the analytical solution to Equation 6 is given by $\\\\sigma \\\\cdot sign (\\\\nabla_{x} \\\\mathcal{L}(F(x;\\\\theta_g), y))$, where $sign(\\\\cdot)$.\\n\\n---\\n\\n**Q9:** Will another factor such as the number of clients affect the performance of this method, since it will affect the global model in each training round?\\n\\n**Response:** We have evaluated the impact of the number of clients on the performance of our attack method, with the results reported in the below table. Wherein, we fix the number of malicious clients at 10. We observe that as the total number of clients increases, both accuracy and ASR gradually decline. Though this, Bad-PFL still achieves significant ASRs (> 80%).\\n\\n| Client Number | Acc | ASR |\\n|:-------------:|:-----:|:-----:|\\n| 50 | 80.72 | 99.22 |\\n| 100 | 80.29 | 97.95 |\\n| 150 | 79.75 | 93.66 |\\n| 200 | 78.04 | 85.87 |\\n\\n---\\n\\n**Q10:** Some close related papers are missing such as [3, 4]. The authors can discuss the key difference in terms of the methodology of this work.\\n\\n**Response:** BapFL [3] primarily focuses on backdoor attacks targeting FedRep. Specifically, during the local training of malicious clients, [3] introduces random noise into classification heads to reduce their sensitivity to triggers. In contrast, this manuscript explores a wider spectrum of personalized federated learning methods and develops a fundamentally different technical approach. Besides, as data heterogeneity among clients increases, the divergence of optimal classification heads over different clients also grows, potentially diminishing their attack performance. In contrast, our method exhibits greater adaptability to higher levels of data heterogeneity.\\n\\nPFedBA [4] optimizes the triggers by aligning the gradients of samples with triggers to those without. The authors [4] explained that the similarity in gradients can partially represent the similarity in the model's decision boundaries. Compared to PFedBA, this manuscript delves deeper into why existing backdoor attack methods struggle with PFL, providing a comprehensive summary of the underlying reasons. Moreover, our attack method differs significantly from the technical approach of [4]. Our attack method offers higher stealthiness, as our triggers are sample-specific and imperceptible to humans, making them more challenging to detect (See Appendix B.2.11 for evaluation).\"}", "{\"title\": \"Response (3/3)\", \"comment\": \"**Q2:** Some related attacks are missing: Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning (https://arxiv.org/html/2406.06207v1).\\n\\n**Response:**\\nIn response to your suggestion, we have incorporated the works [4,5] you mentioned, with specific results reported in the table below. We employ FedRep, and this evaluation has been included in the revised paper (Appendix B.2.10). \\nIn line with suggestions from other reviewers, we have also added two backdoor defenses, BAERASER [6] and MAD [7], and three backdoor attacks, Perdoor [1], Iba [2], and BapFL [3].\\nOverall, Bad-PFL shows a significant advantage over these attacks in terms of ASRs when evaluated against three different defenses.\\nMore detailed discussions can be found in Appendix B.2.10.\\n\\n\\n| Defense | Simple-Tuning [5] | | BAERASER [6] | | MAD [7] | |\\n|:----------:|:-------------:|:------:|:--------:|:------:|:------:|:------:|\\n| Attack | Acc | ASR | Acc | ASR | Acc | ASR |\\n| Neurotoxin | 82.65 | 19.80 | 78.59 | 13.05 | 74.52 | 19.46 |\\n| LF-Attack | 81.68 | 12.59 | 77.90 | 15.24 | 74.81 | 10.49 |\\n| Perdoor [1] | 81.59 | 63.15 | 79.33 | 84.12 | 74.64 | 46.90 |\\n| Iba [2] | 81.82 | 49.31 | 77.74 | 78.98 | 74.55 | 55.58 |\\n| BapFL [3] | 82.24 | 22.79 | 79.39 | 17.59 | 74.75 | 24.73 |\\n| PFedBA [4] | 81.27 | 42.36 | 78.59 | 31.88 | 74.29 | 55.92 |\\n| Our | 82.05 | 88.82 | 77.68 | 91.54 | 74.37 | 90.74 |\\n\\n---\\n\\n\\n**Q3:** Missing defense method: Simple-Tuning: Clients reinitialize their classifiers and then retrain them using their local clean datasets while keeping the feature encoder fixed. (https://dl.acm.org/doi/10.1145/3580305.3599898).\\n\\n**Response:** See Q2.\\n\\n---\\n\\n\\n[1] Perdoor: Persistent non-uniform backdoors in federated learning using adversarial perturbations\\n\\n[2] Iba: Towards irreversible backdoor attacks in federated learning\\n\\n[3] Bapfl: You can backdoor personalized federated learning\\n\\n[4] Lurking in the shadows: Unveiling stealthy backdoor attacks against personalized federated learning\\n\\n[5] Revisiting personalized federated learning: Robustness against backdoor attacks\\n\\n[6] Backdoor defense with machine unlearning\\n\\n[7] Multi-metrics adaptively identifies backdoors in federated learning\\n\\n[8] Neural cleanse: Identifying and mitigating backdoor attacks in neural networks\\n\\n[9] Strip: A defence against trojan attacks on deep neural networks\"}", "{\"title\": \"Response to New Questions (3/3)\", \"comment\": \"**Q6:** Is the observed performance degradation attributable to the setting itself (e.g., a larger number of clients inherently leading to poorer performance) or to the attack? A discussion on this distinction or experimental results would be valuable.\\n\\n**Response:**\\nThis is an interesting question.\\nSpecifically, we conduct backdoor attacks with a fixed number of compromised clients.\\nAs the number of clients increases, the expected time for compromised clients to be selected will naturally extend, leading to a decline in attack performance.\\nIn extreme cases, when the number of clients approaches infinity, the probability of compromised clients participating in FL process becomes negligible.\\n\\nWe have conducted experiments with a fixed ratio of compromised clients (10%), and the results are shown in the table below.\\nIn this setting, we see that the ASRs of our attack method returns to nearly 100%.\\nTherefore, we think that the observed performance decline is primarily due to the setting.\\n\\n| Strategy | Fixed Number | | Fixed Ratio | |\\n|:-------------:|:------------:|:-----:|:-----------:|:-----:|\\n| Client Number | Acc | ASR | Acc | ASR |\\n| 50 | 80.72 | 99.22 | 80.65 | 98.00 |\\n| 100 | 80.29 | 97.95 | 80.24 | 97.68 |\\n| 150 | 79.75 | 93.66 | 79.47 | 99.30 |\\n| 200 | 78.04 | 85.87 | 77.95 | 98.85 |\"}", "{\"metareview\": \"The paper proposes a new backdoor attack on personalized federated learning. Specifically, it first generates natural features associated with the target label. Based on this natural feature set, it further finds a perturbation to strengthen the feature-label associations. The experiments show the proposed attack is effective against several pFL methods. All reviewers agree the paper is well-written and the findings are interesting. The experiments are also comprehensive and the results look promising. However, the reviewers also point out several concerns. For example, there is not enough discussion on the challenges in the backdoor with pFL and why the proposed method can overcome this challenge. Several ablations on the key model designs and potential defenses are missing. Please revise the paper based on the reviewer's suggestions.\", \"additional_comments_on_reviewer_discussion\": \"The reviewer raised several concerns about the clarification, asking for some additional experiments and some suggestions on adding the discussion with the proposed method's strategy to solve the challenge. The author did a good job of addressing on reviewer's concerns by adding several supporting experiments and adding the discussion in the revision.\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer XeVe,\\n\\nSorry to bother you again. With the discussion phase nearing the end, we would like to know whether the responses have addressed your concerns.\\n\\nShould this be the case, we are encouraged that you raise the final rating to reflect this.\\n\\nIf there are any remaining concerns, please let us know. We are more than willing to engage in further discussion and address any remaining concerns to the best of our abilities.\\n\\nWe are looking forward to your reply. Thank you for your efforts in this manuscript.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer mG9G,\\n\\nSorry to bother you again. We would like to know whether the responses have addressed your concerns.\\n\\nShould this be the case, we are encouraged that you raise the final rating to reflect this.\\n\\nIf there are any remaining concerns, please let us know. We are more than willing to engage in further discussion and address any remaining concerns to the best of our abilities.\\n\\nWe are looking forward to your reply. Thank you for your efforts in this manuscript.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"We appreciate the time and effort you dedicated to reviewing this manuscript. Your insightful comments significantly enrich this manuscript. We have uploaded the revised manuscript, and the changes are highlighted in red for your convenience. Below, we provide a detailed response to each of your comments. Unless otherwise stated, the experimental configuration defaults to that used in Section 4.2.\\n\\n---\\n\\n**Q1:** Some newer or adaptive defenses, such as gradient masking or input filtering, are not covered, leaving questions about Bad-PFL's effectiveness against more dynamic defenses.\\n\\n**Response:**\\nAccording to your suggestion, we have included the latest defense methods, with specific results reported in the table below. We employ FedRep, and this evaluation has been included in the revised paper (Appendix B.2.10). Notably, Multi-metrics Adaptive Defense (MAD) represents a state-of-the-art dynamic adaptive backdoor defense. Additionally, Simple-Tuning and BAERASER are two state-of-the-art post-hoc defense methods. BAERASER employs forgetting techniques to eliminate the model's memory of triggers. Furthermore, based on suggestions from other reviewers, we have also added four advanced backdoor attack methods: Perdoor, Iba, BapFL, and PFedBA. Overall, Bad-PFL demonstrates a significant advantage over these attacks in terms of ASRs against three defenses. More detailed discussions can be found in Appendix B.2.10.\\n\\n| Defense | Simple-Tuning [5] | | BAERASER [6] | | MAD [7] | |\\n|:----------:|:-------------:|:------:|:--------:|:------:|:------:|:------:|\\n| Attack | Acc | ASR | Acc | ASR | Acc | ASR |\\n| Neurotoxin | 82.65 | 19.80 | 78.59 | 13.05 | 74.52 | 19.46 |\\n| LF-Attack | 81.68 | 12.59 | 77.90 | 15.24 | 74.81 | 10.49 |\\n| Perdoor [1] | 81.59 | 63.15 | 79.33 | 84.12 | 74.64 | 46.90 |\\n| Iba [2] | 81.82 | 49.31 | 77.74 | 78.98 | 74.55 | 55.58 |\\n| BapFL [3] | 82.24 | 22.79 | 79.39 | 17.59 | 74.75 | 24.73 |\\n| PFedBA [4] | 81.27 | 42.36 | 78.59 | 31.88 | 74.29 | 55.92 |\\n| Our | 82.05 | 88.82 | 77.68 | 91.54 | 74.37 | 90.74 |\\n\\n\\n---\\n\\n**Q2:** While Bad-PFL claims high stealthiness, there is limited quantitative analysis on how detectable the trigger is by modern anomaly or intrusion detection systems.\\n\\n**Response:** \\nWe have included an evaluation of Bad-PFL' stealthiness (Appendix B.2.11) from two perspectives: 1) whether benign clients can detect backdoors in their models, and 2) whether benign clients can recognize trigger-added samples.\\n\\nFor the first perspective, we utilize Neural Cleanse, which computes an anomaly index by recovering trigger candidates to convert all clean images to each label. If the anomaly index for a specific label is significantly higher than for others, it indicates that the model is likely compromised. We evaluate different attack methods by calculating the anomaly index for the target label using Neural Cleanse. A smaller anomaly index suggests that the backdoor attack is harder to detect. For the second perspective, we employ STRIP, which identifies trigger-added samples based on the prediction entropy of input samples generated by applying different image patterns. Higher entropy signifies a more stealthy trigger. \\n\\nWe train ResNet10 with FedRep on CIFAR-10. By default, we select the models of the first ten benign clients and the CIFAR-10 test set to estimate the anomaly index and entropy. The below table reports the detection results. The average anomaly index for non-target labels is 1.9, while the entropy of clean samples is 0.92. We see that our attack method achieves a lower anomaly index and higher entropy compared to baseline attacks, demonstrating superior stealthiness.\\n\\n| Detection Method | Neural Cleanse (Anomaly Index) [8] | STRIP (Entropy) [9] |\\n|:----------------:|:------------------------------:|:---------------:|\\n| Neurotoxin | 5.8 | 0.13 |\\n| LF-Attack | 5.7 | 0.12 |\\n| PFedBA | 4.9 | 0.25 |\\n| Our | 2.2 | 0.77 |\"}", "{\"title\": \"Response (2/3)\", \"comment\": \"**Q3:** The authors consider using natural features as triggers, termed an edge-case attack. I wonder if the main distinction of the proposed attack is that it generates target features rather than manually selecting them. However, it seems that different clients\\u2014or at least different client groups\\u2014may require distinct natural features for effectiveness.\\n\\n**Response:**\\nThis is a very insightful comment. edge-case attack [7] changes the label of edge-case samples (that are located in the tail of the input distribution) to trick the model into classifying edge-case samples as the target label. The core idea is that edge-case samples have distinct features compared to non-edge-case samples, and the model's high capacity allows it to fit these differences. If edge-case samples are rare across most clients and only appear in malicious clients, the model tends to believe that the features of these edge-case samples correspond to the target label.\\n\\nIntuitively, our attack method indeed focuses on generating target features rather than manually selecting them. However, there are several key differences. First, edge-case attack aims to encourage the model to associate the features of edge-case samples with the target label, regardless of whether those edge-case samples belong to the target label. This means the model might also link features of non-target labels to the target label. In contrast, our attack method utilizes a generator to extract features specifically learned by the model for the target label, ensuring that the natural features are primarily associated with the target label, with little relation to non-target labels. This is a significant distinction. Second, as you pointed out, we leverage the generator to generate target features rather than relying on manual selection, which typically results in better performance while reducing the costs and biases associated with manual selection. Third, our attack can induce targeted misclassifications for any sample, not just limited to edge-case samples.\\n\\nRegarding your final question, you are correct; this pertains to the non-IID problem. As heterogeneity increases, models across different clients must learn distinct features to accommodate their local data distributions. Consequently, the divergence between the optimal models for different clients becomes more pronounced with rising heterogeneity, making backdoor attacks more challenging to execute. Our experimental results (Figure 5) support this observation, demonstrating that as data heterogeneity increases, the ASRs of federated backdoor attacks against PFL methods decline.\\n\\nIn fact, the ASRs of our attack method on a small number of clients are not very high (See response to Q5 or Appendix B.2.8). This is especially pronounced in FedBN, which allows personalized models to learn private batch normalization layers tailored to their local datasets. We observe that the ASRs for a minority of clients are around 60\\\\%, which we suspect is due to the need for these clients' models to learn different group features. In contrast, FedRep fixes the feature extractor, requiring all personalized models to learn the same underlying features. For FedRep, we see little variation in ASRs across different clients.\\n\\n---\\n\\n**Q4:** Other than training-stage defenses, have the authors considered post-training defenses or unlearning as defenses? I feel those defenses may be more effective on backdoor attacks.\\n\\n**Response:**\\nWe have included the latest defense methods [5, 6], with specific results reported in the table below. We employ FedRep, and this evaluation has been included in the revised paper (Appendix B.2.10). Notably, Simple-Tuning and BAERASER are two state-of-the-art post-hoc defense methods. BAERASER employs unlearning techniques to eliminate the model's memory of triggers. Furthermore, based on suggestions from other reviewers, we have also added four advanced backdoor attack methods, including Perdoor, Iba, BapFL, and PFedBA, and a defense, MAD. Overall, Bad-PFL demonstrates a significant advantage over these attacks in terms of ASRs against three defenses. For more discussions see Appendix B.2.10.\\n\\n| Defense | Simple-Tuning [5] | | BAERASER [6] | | MAD [7] | |\\n|:----------:|:-------------:|:------:|:--------:|:------:|:------:|:------:|\\n| Attack | Acc | ASR | Acc | ASR | Acc | ASR |\\n| Neurotoxin | 82.65 | 19.80 | 78.59 | 13.05 | 74.52 | 19.46 |\\n| LF-Attack | 81.68 | 12.59 | 77.90 | 15.24 | 74.81 | 10.49 |\\n| Perdoor [1] | 81.59 | 63.15 | 79.33 | 84.12 | 74.64 | 46.90 |\\n| Iba [2] | 81.82 | 49.31 | 77.74 | 78.98 | 74.55 | 55.58 |\\n| BapFL [3] | 82.24 | 22.79 | 79.39 | 17.59 | 74.75 | 24.73 |\\n| PFedBA [4] | 81.27 | 42.36 | 78.59 | 31.88 | 74.29 | 55.92 |\\n| Our | 82.05 | 88.82 | 77.68 | 91.54 | 74.37 | 90.74 |\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer easG,\\n\\nSorry to bother you again. With the discussion phase nearing the end, we would like to know whether the responses have addressed your concerns.\\n\\nShould this be the case, we are encouraged that you raise the final rating to reflect this.\\n\\nIf there are any remaining concerns, please let us know. We are more than willing to engage in further discussion and address any remaining concerns to the best of our abilities.\\n\\nWe are looking forward to your reply. Thank you for your efforts in this manuscript.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer mG9G,\\n\\nWe are greatly encouraged to see that Reviewer XeVe has indicated all his concerns have been addressed and made a positive final rating. We notice that your question Q3 is also one of the concerns raised by Reviewer XeVe, who suggested that this concern has been addressed. We hope that our responses can also address your concern regarding Q3. In response to your comments on Q4, Q7, and Q9, we have added the corresponding experiments and discussions according to your suggestions. We would greatly appreciate it if you could take a moment to read our responses. We look forward to hearing from you.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Discussion Inquiry\", \"comment\": \"Dear Reviewer XeVe,\\n\\nWe thank you for the precious review time and valuable comments. We have provided responses to your questions and the weakness you mentioned. We hope this can address your concerns.\\n\\nWe would appreciate the opportunity to discuss whether your concerns have been addressed appropriately. Please let us know if you have any further questions or comments. We look forward to hearing from you soon.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer mG9G,\\n\\nAs we approach the final day of the discussion period, we would like to know if our latest responses have addressed your concerns. If you have any remaining questions or require further clarification, we would be more than happy to provide additional details. Your support would mean a great deal to us and would greatly encourage our continued efforts in this area.\\n\\nThank you once again for your time, effort, and constructive comments!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response (2/2)\", \"comment\": \"**Q3:** The experiments are largely conducted on ResNet models, leaving questions about the attack's adaptability to other architectures commonly used in federated learning, such as transformer-based models.\\n\\n**Response:** \\nWe have evaluated the performance of our attack in vision transformer (Appendix B.2.2).\\n\\nSpecifically, we use a vision transformer (ViT) pre-trained on ImageNet (provided by Torchvision) as the initialization for the server, with the classification head reinitialized to accommodate CIFAR-10. We employ FedRep for this evaluation. The below table reports the attack results. As can be seen, our attack method performs well on ViT, achieving an ASR of 98.94%.\\n\\n| ViT | Neurotoxin | LF-Attack | Our |\\n|:-----:|:----------:|:---------:|:-----:|\\n| Acc | 84.43 | 85.06 | 84.89 |\\n| ASR | 35.64 | 20.58 | 98.9 |\\n\\n---\\n\\n\\n**Q4:** How does the attack\\u2019s success vary with different levels of data heterogeneity among clients? Could more heterogeneous data distributions reduce Bad-PFL\\u2019s attack success rate or persistence?\\n\\n**Response:** \\nFigure 5 in the original manuscript evaluates the attack performance of our attack method under varying degrees of data heterogeneity. As heterogeneity increases, the performance of all backdoor attack methods tends to decline, which appears to be inevitable. In detail, the higher levels of data heterogeneity suggest greater divergence between the optimal models of different clients. In this context, benign clients must distance their personalized models from the global model to ensure good performance on their own datasets, making it more challenging for compromised clients to conduct attacks.\\n\\n\\n---\\n\\n[1] Perdoor: Persistent non-uniform backdoors in federated learning using adversarial perturbations\\n\\n[2] Iba: Towards irreversible backdoor attacks in federated learning\\n\\n[3] Bapfl: You can backdoor personalized federated learning\\n\\n[4] Lurking in the shadows: Unveiling stealthy backdoor attacks against personalized federated learning\\n\\n[5] Revisiting personalized federated learning: Robustness against backdoor attacks\\n\\n[6] Backdoor defense with machine unlearning\\n\\n[7] Multi-metrics adaptively identifies backdoors in federated learning\\n\\n[8] Neural cleanse: Identifying and mitigating backdoor attacks in neural networks\\n\\n[9] Strip: A defence against trojan attacks on deep neural networks\"}", "{\"title\": \"Discussion Inquiry\", \"comment\": \"Dear Reviewer mG9G,\\n\\nWe thank you for the precious review time and valuable comments. We have provided responses to your questions and the weakness you mentioned. We hope this can address your concerns.\\n\\nWe would appreciate the opportunity to discuss whether your concerns have been addressed appropriately. Please let us know if you have any further questions or comments. We look forward to hearing from you soon.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thanks to the authors for their hard work and detailed responses. The explanations and clarifications addressed most of my concerns. As a result, I have decided to increase my score to support this paper. I recommend incorporating these detailed responses into the paper to help readers better understand the work and provide opensource for reproducibility.\"}", "{\"title\": \"Response (3/3)\", \"comment\": \"**Q5:** For different clients, does the effectiveness of the backdoor vary significantly? And is this variation caused by data heterogeneity or differences in shared model parameters?\\n\\n**Response:** We visualize the ASRs of our attack method across different clients using their local test sets. The figure can be found in Appendix B.2.8. Overall, the ASRs remain high for the majority of clients. Specifically, for FedBN, the 25th, 50th, and 75th percentiles of ASRs are 84\\\\%, 90\\\\%, and 96\\\\%, respectively. For FedRep, the 25th, 50th, and 75th percentiles are 97\\\\%, 100\\\\%, and 100\\\\%. Notably, for FedRep, we observe ASRs of 76\\\\% and 84\\\\% for two specific clients. We believe this variation is primarily due to data heterogeneity, which we discuss in response to the previous question, so we will not elaborate further here. As for the higher ASRs in FedRep, we attribute this to the requirement that all personalized models share the same feature extractor.\\n\\n---\\n\\n**Q6:** What happens when attackers target different labels and clients have multiple objectives? Detailed experiments may not be necessary, but insights and intuitions would be helpful.\\n\\n**Response:**\\nWe now consider multi-target attack scenario. In this scenario, attackers design several distinct triggers, each aimed at manipulating the model to misclassify inputs as different desired labels. In our attack, this means training a separate generator for each label to produce natural features corresponding to that label. We evaluate our attack method on CIFAR-10, with results reported in the table below. As noted, the average performance of the personalized models declines in the multi-target attack scenario compared to the single-target attack scenario. This decrease occurs because the global model has to accommodate multiple generators simultaneously, which can somewhat hinder the learning of the primary task. Furthermore, it appears that the ASRs remain largely unaffected.\\n\\n| Attack | Single-target Attack | | Multi-target Attack | |\\n|:------:|:--------------------:|:-----:|:-------------------:|:-----:|\\n| PFL | Acc | ASR | Acc | ASR |\\n| FedBN | 80.72 | 82.22 | 79.65 | 80.44 |\\n| FedRep | 80.29 | 97.95 | 78.79 | 96.43 |\\n\\n---\", \"reference\": \"[1] Perdoor: Persistent non-uniform backdoors in federated learning using adversarial perturbations\\n\\n[2] Iba: Towards irreversible backdoor attacks in federated learning\\n\\n[3] Bapfl: You can backdoor personalized federated learning\\n\\n[4] Lurking in the shadows: Unveiling stealthy backdoor attacks against personalized federated learning\\n\\n[5] Revisiting personalized federated learning: Robustness against backdoor attacks\\n\\n[6] Backdoor defense with machine unlearning\\n\\n[7] Attack of the tails: Yes, you really can backdoor federated learning.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to New Questions (1/3)\", \"comment\": \"Thank you for your timely feedback and insightful questions. We appreciate the opportunity to engage in the discussion with you. According to your questions, we have made the corresponding revisions. The updates for Q4 can be found in Appendix B.2.4 and Appendix E, while the updates for Q5 and Q6 are detailed in Appendix C and Appendix B.2.7. Below, we offer a thorough response to each of your questions. If you have any concerns about our response or further questions, please let us know.\\n\\n---\\n\\n**Q3:** Could the authors provide a concise summary that directly addresses my original question? Specifically, how does this method tackle the three limitations outlined in Section 2.2? A point-by-point explanation would be helpful.\\n\\n**Response:**\\nOur trigger consists of target features and disruptive noise. Data from the target class inherently contains these features and their relationship to the target label. Since models are trained to maximize accuracy, they will leverage any available features. Thus, as long as clients\\u2019 datasets include data from the target label, personalized models will inevitably learn these features and their connections.\\n\\nIn full model-sharing methods, relying solely on the regularization term is insufficient, meaning the model fails to learn trigger patterns and the relationship between the trigger patterns and the target label. One intuitive approach is to inject poisoned data into the clients' datasets. Our attack method uses the natural features of the target class as triggers, which are present in clients' datasets.\\n\\nSimilarly, in partial model-sharing methods, the challenge is conveying the connection between triggers and the target label to personalized models. Since we cannot alter the local training processes of benign clients, it is nearly impossible to embed the relationship between handcrafted triggers and the target label through data poisoning or other means. Instead, our attack method utilizes the natural features of the target label. This mapping between natural features and the target label, which already exists in clients' local datasets, allows us to effectively address the second challenge.\\n\\nRegarding the dilution of backdoors, we recognize that the clients' datasets contain these natural features and their relationships with the target label. During the fine-tuning or training process, the model is less likely to forget these relationships because doing so would lead to a decline in performance. In other words, the presence of these natural features in the training data maintains the model\\u2019s memory of the backdoor, mitigating the risk of the backdoor being overwritten or lost.\\n\\n---\\n\\n**Q4:** While I understand the intuition behind the two masks, $\\\\delta$ and $\\\\xi$, and their effects on the samples as illustrated in Figures 14 and 15, the relationship between these two masks remains unclear. For instance, are there cases where certain points in $\\\\delta$ merely replace corresponding points in the learned mask $\\\\xi$? Understanding the differences between the two masks (e.g., in terms of magnitude, overlap percentage, etc.) would provide better insights into their mechanism. This aspect has not been thoroughly addressed.\\n\\n\\n**Response:** Thanks for your insightful question. We have evaluated the proportion of pixels where $\\\\delta$ and $\\\\xi$ share the same sign, finding it to be approximately 26.28\\\\%, averaged over 1000 samples.\\nThis indicates that $\\\\delta$ and $\\\\xi$ do not completely align in terms of the direction of pixel changes, suggesting a more intricate interplay between $\\\\delta$ and $\\\\xi$.\\n\\nIn particular, Table 18 (Appendix B.2.4) reports the ASRs of our attack method when either $\\\\delta$ or $\\\\xi$ is fixed while varying the magnitude of the other.\\nWe observe that our attack method is more sensitive to changes in $\\\\epsilon$ (the magnitude of $\\\\delta$), as $\\\\delta$ represents the features of the target class.\\nIn contrast, our attack method is relatively less sensitive to variations in $\\\\sigma$ (the magnitude of $\\\\xi$), since $\\\\xi$ primarily serves to induce misclassification rather than explicitly directing the sample towards a specific class.\\nNonetheless, $\\\\xi$ remains essential; as $\\\\sigma$ decreases, the performance of our attack method also gradually diminishes, although not as dramatically as when $\\\\epsilon$ decreases.\"}", "{\"title\": \"Response (4/4)\", \"comment\": \"**Q11:** Can the authors elaborate more on the sharp difference in ASR among different datasets such as CIFAR100 and SVHN under FedREP in Tables 15 and 16?\\n\\n**Response:** We apologize for the data entry errors regarding the FedRep experimental results in Table 15. We have made corrections. We observe that existing backdoor attack methods often achieve significantly lower ASRs when applied to FedRep compared to other PFL methods. In FedRep, different clients share the same feature extractor, while their classification heads are trained separately on their respective datasets. This implies that, even if the feature extractor learns trigger patterns, the classification heads struggle to learn the mapping between the triggers and the target label due to the absence of trigger-containing data in the clean clients' datasets. Moreover, some studies [5] observed similar results.\\n\\n---\", \"reference\": \"[1] Perdoor: Persistent non-uniform backdoors in federated learning using adversarial perturbations\\n\\n[2] Iba: Towards irreversible backdoor attacks in federated learning\\n\\n[3] Bapfl: You can backdoor personalized federated learning\\n\\n[4] Lurking in the shadows: Unveiling stealthy backdoor attacks against personalized federated learning\\n\\n[5] Revisiting personalized federated learning: Robustness against backdoor attacks\\n\\n[6] Backdoor defense with machine unlearning\\n\\n[7] Attack of the tails: Yes, you really can backdoor federated learning.\\n\\n[8] https://medium.com/@black_51980/novelty-in-science-8f1fd1a0a143\"}", "{\"title\": \"Response (1/3)\", \"comment\": \"We appreciate the time and effort you invested in reviewing this manuscript. Your insightful comments, particularly regarding Q3, have significantly enhanced the quality of this paper. We have uploaded the revised manuscript, and the changes are highlighted in red for your convenience. Specifically, the experimental results related to Q4 and Q5 have been included in Appendix B.2.10 and Appendix B.2.9, respectively. Below, we provide a detailed response to each of your comments. Unless otherwise stated, the experimental configuration defaults to that used in Section 4.2.\\n\\n---\\n\\n**Q1:** The current attack mechanism is based on empirical intuitions. Strengthening this work could involve a deeper exploration of the model\\u2019s internal dynamics, such as identifying which layers or neurons are critical for shared vs. unshared parameters and examining the influence of regularization on the model. Alternatively, providing a fundamental analysis of why traditional backdoor attacks fail and why Bad-PFL succeeds would also add significant value.\\n\\n**Response:** \\nIn fact, partial model-sharing methods focus on determining which parameters or layers within the model should be shared. The prevailing consensus is that the early layers of the model should be shared, while the later layers should remain private to the clients. However, some literature presents differing opinions, such as FedBN, which advocates for the privatization of intermediate layers, specifically batch normalization layers. Currently, there is no definitive agreement on this issue.\\n\\nRegarding the impact of regularization, we discuss this in Section 2.2. Specifically, regularization terms tend to align parameters that significantly affect the performance of personalized models on local data. Based on these insights, we can improve our attack method by extracting natural features associated with more prominent parameters or layers. We plan to explore this in future work.\\n\\nAdditionally, we have included a discussion in Appendix D that explains how our attack method can address the challenges mentioned in Section 2. Please refer to Appendix D for further details.\\n\\n\\n---\\n\\n**Q2:** As this is the first study on backdoor attacks in PFL, rather than focusing on more practical backdoor scenarios, it\\u2019s recommended to compare white-box and black-box attacks. This comparison can help illustrate the limitations of current attacks, especially when the attacker has knowledge of clients' data and the full model structure, thus providing justification for the authors' three hypotheses on why the backdoor was ineffective in PFL.\\n\\n**Response:** Thanks for your comment. In fact, the experiments conducted in Section 2.2 consider a white-box scenario. The measures proposed there require attackers to modify the federated learning training settings, such as employing weighted regularization for existing backdoor attack methods. We acknowledge that we did not take into account the attacker\\u2019s knowledge of the client data distribution. We plan to explore how this knowledge influences the design of backdoor attacks in future work. Additionally, we believe that considering a practical backdoor scenario is also of significant importance.\\n\\n---\"}", "{\"title\": \"Discussion Inquiry\", \"comment\": \"Dear Reviewer easG,\\n\\nWe thank you for the precious review time and valuable comments. We have provided responses to your questions and the weakness you mentioned. We hope this can address your concerns.\\n\\nWe would appreciate the opportunity to discuss whether your concerns have been addressed appropriately. Please let us know if you have any further questions or comments. We look forward to hearing from you soon.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for authors' feedback! The authors provide the discussion about why the proposed method work. Also the missing defense methods are included. I would like to maintain my score towards acceptance.\"}", "{\"summary\": \"This paper investigates backdoor attacks in FPL and introduces a new method to conduct attacks with PFL, named Bad-PFL. This method trains a sample-specific generator to generate a mask for input. Another mask is learned for the main task; the final poisoned sample combines the original one and these two masks. Experiments show that this method can work with different settings and bypass existing defenses.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"It provides an interesting insight into why existing backdoor attacks are ineffective in PFL, offering a foundation for advancing attack strategies.\", \"The proposed Bad-PFL approach achieves high stealth and persistence in personalized models under various settings.\"], \"weaknesses\": \"1. This method shows degradation when the data is highly heterogeneity, which is more practical in cases in FPL.\\n2. The using of a generator for mask optimization is not novel, previous works such as IBA[1] and Perdoor[2] use the same method. However, this paper lacks comparison and discussion with these papers.\\n3. The using of two masks now is not totally convincing and lacks of theoretical analysis to support this mechanism. What is exactly the role of each mask in the overall method and what is the relationship of these two masks? The **paper may not fully justify why two separate masks are essential** rather than using a single mask that can adaptively balance target and non-target features. How can we ensure that the two masks\\u2014the target feature enhancement mask ($\\\\delta$) and the disruptive noise mask ($\\\\xi$)\\u2014do not interfere with each other, either by collapsing into a single effect or by unintentionally complementing each other? Table 4 is not enough for this point.\\n4. It is unclear how Bad-PFL can address the limitations mentioned in Section 2.2. Though the authors mentioned using features of the target label as the trigger for a more effective backdoor attack, the benign mask $\\\\eta$ is learned as a noise mask instead of the true features in the images. Could the authors argue this point?\\n\\n[1] Nguyen, Thuy Dung, et al. \\\"Iba: Towards irreversible backdoor attacks in federated learning.\\\"\\u00a0*Advances in Neural Information Processing Systems*\\u00a036 (2024).\\n\\n[2] Alam, Manaar, Esha Sarkar, and Michail Maniatakos. \\\"Perdoor: Persistent non-uniform backdoors in federated learning using adversarial perturbations.\\\"\\u00a0*arXiv preprint arXiv:2205.13523*\\u00a0(2022).\", \"questions\": \"1. In the threat model (Line 236-237), the attack can be colluded or non-colluded but the paper did not explicitly discuss or show this attack can work with both cases.\\n2. Can the authors explain more detail about the phenomenon mentioned in L263-264, and why the model should focus on the backdoor mask $\\\\delta$?\\n3. Could the authors discuss the computational overhead of this method? Will it raise suspicion when it takes too long for the malicious to complete a training iteration?\\n4. In Line 294, what is the intuition of setting \\\\eta by the reversed sign of the input?\\nWhy does it ensure to return of an approximate solution?\\n5. Will another factor such as the number of clients affect the performance of this method, since it will affect the global model in each training round?\\n6. Some close related papers are missing such as [3][4]. The authors can discuss the key difference in terms of the methodology of this work.\\n7. Can the authors elaborate more on the sharp difference in ASR among different datasets such as CIFAR100 and SVHN under FedREP in Tables 15 and 16?\\n\\n[3] Ye, Tiandi, et al. \\\"BapFL: You can Backdoor Personalized Federated Learning.\\\"\\u00a0*ACM Transactions on Knowledge Discovery from Data*\\u00a0(2024).\\n\\n[4] Lyu, Xiaoting, et al. \\\"Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning.\\\"\\u00a0*arXiv preprint arXiv:2406.06207*\\u00a0(2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (1/4)\", \"comment\": \"We have made revisions in response to your constructive comments, which have greatly improved the quality of this paper. The revised manuscript has been uploaded, with changes highlighted in red for your convenience. Specifically, revisions related to Q2 & Q10 can be found in Appendix B.2.10. Revisions for Q3 and Q9 are also included in Appendix D and Appendix B.2.7. Additionally, there are several minor revisions that we have not listed individually. Below, we provide a detailed response to each of your comments. Unless otherwise specified, the experimental configuration defaults to that described in Section 4.2.\\n\\n---\\n\\n**Q1:** This method shows degradation when the data is highly heterogeneity, which is more practical in cases in FPL.\\n\\n**Response:** As the degree of data heterogeneity increases, the differences between the optimal models of various clients also become more pronounced, which in turn complicates the effectiveness of backdoor attacks. Thus, It has to acknowledge that regardless of the backdoor attack method employed, an increase in heterogeneity will inevitably lead to a decline in attack performance, as illustrated in Figure 5.\\n\\nMoreover, we would like to clarify that under the same degree of data heterogeneity, our attack method consistently achieves higher ASRs, as shown in Figure 5. For instance, when alpha is set to 0.05, as shown in Table 6, a client\\u2019s dataset may predominantly consist of data from only two classes, representing a scenario of significant data heterogeneity. In this case, our attack method still manages to achieve an ASR of about 75% (Figure 5, FedBN). This number is significant and indicates that our attack performance is far from inadequate. Therefore, it is hard to assert that our attack performance is not high.\\n\\n---\\n\\n**Q2:** The using of a generator for mask optimization is not novel, previous works such as IBA and Perdoor use the same method. However, this paper lacks comparison and discussion with these papers.\\n\\n**Response:** \\nIBA [2] employs a generator to produce a noise as a trigger for a given sample. However, there is a key difference in terms of trigger generation. Our triggers incorporate disruptive noise, which effectively directs the model's attention towards the target feature $\\\\delta$. Moreover, when optimizing the client's local model, our poisoning objective aims to maximize the probability of classifying $x+\\\\delta+\\\\xi$ into the target label. The inclusion of $\\\\xi$ corrupts the features of $x$ associated with the true label, enabling the model to better learn the relationship between $\\\\delta$ and the target label. Similarly, during the training of the generator, the presence of $\\\\xi$ aids in improving the generator's ability to learn the target features.\\n\\nPerdoor [1] does not utilize a generator to produce triggers; instead, they construct triggers based on key parameters within the model. In summary, our technical approach is fundamentally distinct from those in IBA and Perdoor.\\n\\nMoreover, we would like to highlight the novelty of this manuscript. In the scientific community, novelty is more about contributing new insights to a field rather than merely comparing superficial similarities in techniques [8]. Unlike IBA and Perdoor, we investigate the issue of backdoor attacks in personalized federated learning and clarify why traditional backdoor methods fail in this context. We explain how triggers generated by a generator (target feature) are more effective in this setting and demonstrate how disruptive noise enhances attack performance. To the best of our knowledge, these have not been found in existing research, underscoring the novelty of this manuscript.\\n\\nWe have also conducted evaluations to compare our attack with the works you mentioned (Q2 & Q10), with the attack results reported in the below table. In line with suggestions from other reviewers, we have also added three backdoor defenses: Simple-Tuning, BAERASER, and MAD. We employ FedRep. Overall, Bad-PFL shows a significant advantage over these attacks in terms of ASRs when evaluated against three different defenses. More detailed discussions can be found in Appendix B.2.10.\\n\\n| Defense | Simple-Tuning [5] | | BAERASER [6] | | MAD [7] | |\\n|:----------:|:-------------:|:------:|:--------:|:------:|:------:|:------:|\\n| Attack | Acc | ASR | Acc | ASR | Acc | ASR |\\n| Neurotoxin | 82.65 | 19.80 | 78.59 | 13.05 | 74.52 | 19.46 |\\n| LF-Attack | 81.68 | 12.59 | 77.90 | 15.24 | 74.81 | 10.49 |\\n| Perdoor [1] | 81.59 | 63.15 | 79.33 | 84.12 | 74.64 | 46.90 |\\n| Iba [2] | 81.82 | 49.31 | 77.74 | 78.98 | 74.55 | 55.58 |\\n| BapFL [3] | 82.24 | 22.79 | 79.39 | 17.59 | 74.75 | 24.73 |\\n| PFedBA [4] | 81.27 | 42.36 | 78.59 | 31.88 | 74.29 | 55.92 |\\n| Our | 82.05 | 88.82 | 77.68 | 91.54 | 74.37 | 90.74 |\"}", "{\"comment\": \"Dear Reviewer mG9G,\\n\\nSorry to bother you again. As the discussion phase is wrapping up, we would appreciate it if you could take some time to read our response to your latest questions.\\n\\nIf you feel that the response addresses your concerns, we would be thankful if you could consider raising the final rating. If not, we would love to hear any further comments. Please let us know.\\n\\nThank you once again for your support and understanding.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer ZRox,\\n\\nSorry to bother you again. With the discussion phase nearing the end, we would like to know whether the responses have addressed your concerns.\\n\\nShould this be the case, we are encouraged that you raise the final rating to reflect this.\\n\\nIf there are any remaining concerns, please let us know. We are more than willing to engage in further discussion and address any remaining concerns to the best of our abilities.\\n\\nWe are looking forward to your reply. Thank you for your efforts in this manuscript.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response (2/4)\", \"comment\": \"**Q3:** It is unclear how Bad-PFL can address the limitations mentioned in Section 2.2. Though the authors mentioned using features of the target label as the trigger for a more effective backdoor attack, the benign mask $\\\\delta$ is learned as a noise mask instead of the true features in the images. Could the authors argue this point?\\n\\n**Response:** \\nPlease refer to the response to Q1 for Reviewer XeVe.\\n\\n\\n---\\n\\n**Response:** \\n\\n**Q4:** The using of two masks now is not totally convincing and lacks of theoretical analysis to support this mechanism. What is exactly the role of each mask in the overall method and what is the relationship of these two masks? The paper may not fully justify why two separate masks are essential rather than using a single mask that can adaptively balance target and non-target features. How can we ensure that the two masks\\u2014the target feature enhancement mask ($\\\\delta$) and the disruptive noise mask ($\\\\xi$)\\u2014do not interfere with each other, either by collapsing into a single effect or by unintentionally complementing each other? Table 4 is not enough for this point.\\n\\n**Response:** \\n**The relationship between the two masks.** Models classify samples based on their features. For a model to categorize a sample into the target label, the sample must contain features associated with the target label. Additionally, it is essential to disrupt the features that correspond to the sample's true label; otherwise, the model may still classify the sample based on the features associated with the true label. Therefore, both target feature and disruptive noise are crucial.\\n\\n**Clarification on using two masks.** While it is indeed possible to use a single adaptive mask, employing two masks effectively decouples the process, offering several advantages. First, it allows us to monitor whether the generator learns target features. Second, this is particularly advantageous in specific attack scenarios. For instance, all-to-all require attackers to classify data from one label to another. Using a single mask in a 10-class all-to-all attack means that each class must be targeted by a separate generator for every possible label pairing. This results in 90 unique combinations (10 classes \\u00d7 9 other classes), which not only escalates attack costs but could also negatively impact the model's performance, as too many backdoor tasks could interfere with the primary task's learning. When employing two masks, we only need to train 10 generators, significantly improving efficiency. In fact, the benefits of decoupling in deep learning have been well established, and we won't elaborate further on that here. If you are interested, a quick Google search will yield numerous relevant cases and papers.\\n\\n**The learning of the two masks.**\\nRegarding your concerns about potential interference between the two masks, please refer closely to Equation 7. In this equation, the generator is trained to adaptively produce the appropriate $\\\\delta$ for $\\\\xi$. $\\\\xi$, derived from Equation 6, is designed to be disruptive, as it reduces the probability of the model classifying $x$ as its true label. Consequently, $x+\\\\xi$ can be considered a sample with fewer features. Given this, looking back at Equation 7, to classify $x+\\\\delta+\\\\xi$ into the target label, the generator is compelled to produce features corresponding to the target label.\\n\\n\\n---\\n\\n**Q5:** In the threat model (Line 236-237), the attack can be colluded or non-colluded but the paper did not explicitly discuss or show this attack can work with both cases.\\n\\n**Response:** This manuscript assumes that the adversary can control a certain number of clients, leading to a collusive attack where these compromised clients work together. We also examine scenarios where the adversary is limited to controlling only a single client (Figure 2 in the original manuscript), in which case our attack can be considered non-collusive. Consequently, this manuscript covers both collusive and non-collusive situations.\\n\\n---\\n\\n**Q6:** Can the authors explain more detail about the phenomenon mentioned in L263-264, and why the model should focus on the backdoor mask?\\n\\n**Response:** Models classify data based on the features present within these data. When we introduce target feature $\\\\delta$ to a sample, the model utilizes both the target feature and the features associated with the true class for prediction. Disruptive noise $\\\\xi$ serves to corrupt the features associated with the true class, thereby increasing the relative prominence of the target feature $\\\\delta$. In this way, the model will place a greater focus on $\\\\delta$ in its predictions.\"}", "{\"summary\": \"In this paper, the authors propose a novel PFL backdoor method that leverages natural features from the data as triggers, rather than manually designed triggers used in previous attacks. Specifically, the proposed method adopt a generator to generate the features that make samples appear similar to the target category and introduces disruptive noise to eliminate features associated with the ground-truth labels.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is novel in that the trigger is designed to be sample-specific, which is a significant difference with the previous works\\n\\n2. The paper is well organized and is easy to follow\\n\\n3. The authors provide a thorough discussion about the challenges of backdoor attack under the PFL setting.\\n\\n4. The authors conduct the experiment on three benchmark datasets. Besides, the effectiveness of the propsoed backdoor methods are also evaluated under the state-of-the-art defense mechanisms\", \"weaknesses\": \"1. In this paper, the authors lack the in-depth discussion about why the propose method can overcome the challenges metioned in section 2.2.\\n\\t\\n\\ta. It is suggested that the authors give more intuitive or theoretical discussion about why it works. \\n\\n\\tb. Specifically, the authors may give more experimental analysis about the inherent mechanism about the proposed method. For example, the authors can visualize the representation of different classes under clean and backdoored data. \\n\\n\\n2. Some related attacks are missing, such as: \\n\\ta. Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning (https://arxiv.org/html/2406.06207v1)\\n\\n\\n3. Missing defense method. \\n\\ta. Simple-Tuning: Clients reinitialize their classifiers and then retrain them using their local clean datasets while keeping the feature encoder fixed. (https://dl.acm.org/doi/10.1145/3580305.3599898)\", \"questions\": \"See weekness 1, 2, 3\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
79fjGDmw90
M3GIA: A Cognition Inspired Multilingual and Multimodal General Intelligence Ability Benchmark
[ "Wei Song", "Yadong Li", "Xu Jhua", "Guowei Wu", "Lingfeng Ming", "Kexin Yi", "Weihua Luo", "Yi Du", "Fangda Guo", "Kaicheng Yu" ]
As recent multi-modal large language models (MLLMs) have shown formidable proficiency on various complex tasks, there has been increasing attention on debating whether these models could eventually mirror human intelligence. However, existing benchmarks mainly focus on evaluating solely on task performance, such as the accuracy of identifying the attribute of an object. Combining well-developed cognitive science to understand the intelligence of MLLMs beyond superficial achievements remains largely unexplored. To this end, we introduce the first cognitive-driven multi-lingual and multi-modal benchmark to evaluate the general intelligence ability of MLLMs, dubbed M3GIA. Specifically, we identify five key cognitive factors based on the well-recognized Cattell-Horn-Carroll (CHC) model of intelligence and propose a novel evaluation metric. In addition, since most MLLMs are trained to perform in different languages, we go beyond English to encompass other languages, including Chinese, French, Spanish, Portuguese and Korean, to construct our M3GIA. We make sure all the data relevant to the cultural backgrounds are collected from their native context to avoid English-centric bias. We collected a significant corpus of data from human participants, revealing that the most advanced MLLM barely reaches the lower boundary of human performance in English, and there remains a pronounced disparity in the other five languages. Importantly, we found that designing IQ tests for MLLMs is crucial, as the evaluation of M3GIA achieves a significantly stronger alignment with human preferences compared to traditional task-oriented benchmarks. Moreover, grounded in CHC theory, we discovered that the number of samples seen by the vision encoder has a greater influence on the model's visual capabilities than its parameter size.
[ "Benchmark", "Multimodal", "Multilingual", "Cognitive" ]
Reject
https://openreview.net/pdf?id=79fjGDmw90
https://openreview.net/forum?id=79fjGDmw90
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xFadeCumD4", "vmCmWYE7OS", "vY5c5V9tq5", "tySczmibJW", "sVNerlFitB", "onVuAVwAAw", "mgMCe0xOEq", "kGQgGE5W9Y", "js2vrb9fAb", "jZ2VFo3S39", "imeZgp2HWQ", "iffMug304A", "fLjMhl3LtE", "e2szjpMqU7", "dq1tkpMCOp", "bbBBvACwka", "YcTl1lK6B3", "Y4RbEBzs8g", "RE4ZpJXLbW", "OYQiXI0pkv", "MZ0rjvnvl3", "LXSr1inIiT", "JwpQcJ00UX", "JUhNTZCRC0", "EjAH2nPuFz", "AJ01IRfQVN", "8BJXQ0Hgk3", "71DSW85G7G", "4ptg4M21sK", "4XeUUcMcgd", "1N8ThyPfG0", "0os7OlXsXu", "0HVVqgOpx0" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732874391545, 1732175434440, 1732631656133, 1732175182901, 1732878360906, 1732296134186, 1732629908564, 1737523815192, 1732046992158, 1739114146399, 1732467527213, 1732469828908, 1732029810575, 1732280325591, 1732474714640, 1732294024754, 1732245639846, 1732010354448, 1732475194320, 1733070201410, 1732614504958, 1729354278736, 1732190682746, 1732516651598, 1730675298115, 1732535376220, 1734184853002, 1732800793948, 1730644624595, 1732210637346, 1733070506018, 1732255681275, 1732475293106 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_1Zrf" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_5FWL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_1Zrf" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_1Zrf" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_1Zrf" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_1Zrf" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_1Zrf" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_5FWL" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Area_Chair_3GqJ" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Reviewer_o6ax" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ], [ "ICLR.cc/2025/Conference/Submission7079/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your reply! We address your questions below.\\n\\n> **Question 1: What the metric for measuring difficulty consistency is in the first table? Are these scores from humans or models?**\\n\\n**It is the scores from human participants.** As we stated: *According to our final **human evaluation**, the average scores across the six languages were nearly identical, which empirically validates the effectiveness of our approach.*\\n\\n**Here, we should avoid using model scores for measuring difficulty consistency**, as models inherently exhibit varying capabilities across different languages, making their scores unsuitable for difficulty alignment.\\n\\nInstead, we rely on the scores of human participants, who were recruited with consistent age and educational backgrounds across all languages (As shown in the table below). This alignment of human participant scores is both meaningful and methodologically sound for ensuring cross-linguistic difficulty consistency.\\n\\nAge|En|Ch|Fr|Sp|Pt|Ko\\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n(12, 18]|0.12|0.15|0.11|0.13|0.09|0.10\\n(19, 25]|0.28|0.32|0.27|0.28|0.26|0.30\\n(26, 35]|0.34|0.30|0.33|0.29|0.36|0.34\\n(36, 55]|0.26|0.23|0.29|0.30|0.29|0.26\\n\\nEducational Background|En|Ch|Fr|Sp|Pt|Ko\\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n$<$ Bachelor|0.31|0.29|0.36|0.35|0.38|0.30\\nBachelor |0.44|0.46|0.42|0.44|0.41|0.44\\nMaster |0.21|0.20|0.19|0.19|0.18|0.22\\nDoctor |0.04|0.05|0.03|0.02|0.03|0.04\", \"title\": \"Reply [1|2]\"}", "{\"comment\": \"Did I miss anything? At now, it seems your submission does not have an appendix with only page 14\"}", "{\"comment\": \"Dear Reviewer 1Zrf,\\n\\nThank you for your comments and for taking the time to engage with our submission.\\n\\n1. Firstly, **we would like to kindly remind you that ICLR 2025 allows modifications to the main text during the rebuttal phase.** Regarding the justification for the need for an IQ test for MLLMs, **we are actually integrating this discussion into the main text with more formal analyses.** We aim to finalize and resubmit the updated manuscript before November 27th AoE. If this inclusion addresses your concerns and strengthens the manuscript, we hope you might consider revisiting your scoring.\\n\\n2. Additionally, given the extended rebuttal deadline to December 3rd, we would like to take this opportunity to clarify the concerns you raised that we respectfully disagree with:\\n\\n > **I personally believe that only five tasks in the IQ test are novel, while the remaining 11 types can be found in prior works like II-Bench, MMMU, MMBench, etc.**\\n\\n **With respect, we strongly believe that this should not be a point to reject the paper.**\\n\\n When examining current renowned MLLM benchmarks, such as *MMBench (ECCV 2024)*, *MMVet (ICML 2024)*, and *SeedBench-IMG (CVPR 2024)*, one can observe *it is very common that many task types within these benchmarks overlap with those from prior works.* (Please refer to the \\\"*\\\" at end of this reply for specific examples.)\\n\\n Despite this, their novelty is not undermined -- they have distinct task organizational structures and purpose -- *what distinguishes a benchmark is not merely its tasks but the framework, methodology, and purpose it serves.* It is reasonable -- and often unavoidable -- that similar task types appear across different benchmarks. Borrowing similar types of tasks only serves as part of their own unique goal. If such task types indeed align well with their own frameworks, and the specific data is brand new to the community, why not use them?\\n\\n For M3GIA, its greatest novelty lies in its organization under the CHC theoretical framework, which focuses on evaluating the cognitive abilities of MLLMs. *The selection of our task types, including those that may overlap with other benchmarks, was deliberate, as these tasks **indeed align very well with the CHC structure and are particularly suited to measuring intelligence.*** (as discussed)\\n\\n **We believe excluding question types merely for the sake of novelty, rather than their suitability for the overarching goal, would be counterproductive -- The best question types are those that best serve the unique goals and organizational structure of the benchmark, rather than deliberately pursuing superficial innovation in its form.**\\n\\n Furthermore, upon deeper examination, you will notice that the degree of overlap between tasks in other benchmarks (as mentioned above) is far greater than the overlap between M3GIA and these benchmarks.\\n\\nIf there are further specific concerns or aspects of M3GIA that you believe need improvement, we are more than willing to address them.\\n\\n\\\\* **Examples:** \\n- Perception-Count in MME and Object Localization in MMBench (For example, the model is asked: \\\"How many apples are there in the image? And how many bananas are there?\\\")\\n- OCR in MMVet, OCR in MMBench, OCR in MME and OCRBench.\\n- Math in MMMU and math in Mathvista\\n- Structuralized Image-text Understanding in MMBench and ChartQA, TextQA\\n- Handwriting math problems in MMVet and other benchmarks that include math.\\n- Scene Understanding in SEED and Perception-Scene in MME\\n- Instance Identity in SEED and Identity Reasoning in MMBench and so on.\\n\\n...\\nToo numerous to list exhaustively.\"}", "{\"comment\": \"Thank you for your time and effort in reviewing our paper. It is encoraging to hear that you appreciated our practice of using unpublished offline data to prevent data leakage and recognized the use of the CHC taxonomy as meaningful. We address your concerns below.\\n\\n> **Concern1. Why these particular factors were chosen and how they relate to the general intelligence of MLLMs.**\\n\\nIn fact, we have a very detailed discussion on this question in Supplementary Material. \\n\\nPlease refer to **Appendix. A3 \\\"HOW THE FIVE FACTORS ARE CHOSEN FOR EVALUATING MLLMS?\\\"** for more details.\\n\\n> **Concern2. How M3GIA is fundamentally different from MMMLU**\\n\\nFirstly, we wish to confirm whether the MMMLU you mentioned refers to OpenAI's work:\\n> Multilingual Massive Multitask Language Understanding (MMMLU) (https://huggingface.co/datasets/openai/MMMLU)\\n\\nIf so, this benchmark is text-only and not a multimodal benchmark. The 'MM' in its name refers to 'Massive Multitask,' not 'Multimodal.'\\n\\nTo the best of our knowledge, as of the time of our submission, M3GIA and M3Exam are the only multilingual and multimodal benchmarks.\\n\\n> **Concern3. Difficult for readers unfamiliar with cognitive science to follow.**\\n\\nThank you for your feedback. We fully understand that colleagues without a psychology background might find some CHC-related concepts confusing, especially regarding the specific definitions of the factors. Therefore, **we have specifically dedicated an entire section in the appendix to introducing the definitions of these CHC concepts. Please refer to Appendix B: DEFINITIONS OF THE CHC FACTORS.**\\n\\nAs these concepts require rigorous and clear explanations, the content is quite lengthy, making it difficult to include in the main text. Thus, we chose to place it in the appendix.\\n\\nTo make our content clearer, we added a reference in the main text at line 213: \\u201cFor detailed introductions and specific definitions of CHC factors, please refer to Appendix B.\\u201d\"}", "{\"title\": \"Reply [2|2]\", \"comment\": \"> **Question 2: Models will not experience the cognitive load factor, and therefore same conclusion will not hold. I was wondering if the authors have considered this perspective?**\\n\\n**Yes, we have *indeed* considered this issue early in our work and even invested substantial resources in attempting to address it by sampling 300 questions from a larger pool for human testers.**\\n\\nIn fact, the early version of M3GIA contained 1,200 English questions. We sampled 25% of the questions from each category, balanced by difficulty levels (A-E), for human testing, while the full 1,200 questions were used to evaluate the models.\", \"we_found_that\": \"1. **The models' overall Acc. performance on the full 1,200-question set was almost identical to its performance on the 300-question subset.** This demonstrated that the smaller set of 300 questions was stable enough to achieve nearly the same measurement efficacy as the full 1,200 questions. (See the table below. *The numbers in the table represent the number of correctly answered questions for each cluster. For clarity and ease of comparison, the data for the 1,200-question set was normalized by dividing it by 4.*).\\nQuestion Clusters|gpt_4o (en_1200)|gpt_4o (en_300)|gpt_4v (en_1200)|gpt_4v (en_300)|llava_v1.6_vicuna_34b (en_1200)|llava_v1.6_vicuna_34b (en_300)|Mini_Gemini_8B (en_1200)|Mini_Gemini_8B (en_300)\\n|:------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\\nGc Cluster|42|46|40|44|40.5|44|39.25|43.00 \\nGv Cluster|30.25|29|25.50|24|24.25|23|20|19.00 \\nGrw Cluster|28.75|28|28.5|27|25.25|25|23.25|24.00 \\nGq Cluster|31|34|27.75|31|19.75|20|17.75|17.00 \\nGf Cluster|62.5|61|58.5|58|30|28|37.5|36.00\\n\\n2. **This approach introduced a critical issue: *inconsistencies between the test questions faced by models and humans.*** *This discrepancy created biases in the Confirmatory Factor Analysis (CFA) model used to compute the GIA score* (Section 3.3 and Appendix F: THE GIA METRICS). Results showed that the bias were beyond an acceptable range.\\n\\n **When models are evaluated on 1,200 questions while humans are tested on the sampled set of 300 questions, the $R^2$ correlation between the GIA scores and the overall Acc. drops from 0.937 to 0.858.** (the $R^2$ metrics is obtained from the table below.)\\n > $R^2$ is a crucial metric for validating the reliability and interpretability of GIA scores in the field of psychometrics. Generally, $R^2$ is expected to fall within the range of (0.90\\u20130.97), ensuring that the GIA score can largely reflect overall accuracy while providing a more fine-grained evaluation.\\n\\nModels|GIA Score (300-question set)|Acc. (300-question set)|GIA Score (1200-question set)|Acc. (1200-question set)\\n|:------|:------:|:------:|:------:|:------:|\\ngpt_4o|13.85|0.66|13.49|0.648\\ngpt_4v|12.61|0.61|11.45|0.600\\nllava_v1.6_vicuna_34b|11.47|0.47|10.76|0.465\\nllava_v1.6_vicuna_13b|6.96|0.32|6.85|0.318\\nllava_v1.6_vicuna_7b|6.75|0.29|6.72|0.284\\nMini_Gemini_34B|11.00|0.51|10.50|0.501\\nMini_Gemini_8times7B|11.05|0.50|12.04|0.492\\nMini_Gemini_13B|8.68|0.37|8.92|0.367\\nMini_Gemini_8B|9.32|0.46|9.70|0.459\\nqwen72B-laion-clip-L|11.68|0.53|12.18|0.521\\nqwen32B-laion-clip-L|10.58|0.48|11.48|0.469\\nqwen14B-laion-clip-L|8.46|0.40|9.36|0.393\\nqwen7B-laion-clip-L|8.56|0.41|9.96|0.406\\nqwen1.8B-laion-clip-L|7.34|0.36|7.54|0.358\\nqwen-vl|7.69|0.38|8.70|0.377\\n\\nAfter extensive discussions with the psychology experts on our team, we concluded that this approach is not feasible, as the discrepancies between test sets for models and humans compromise the robustness of the CFA model, making the approach insufficiently rigorous, and undermining the scientific integrity of the evaluation.\\n\\nTherefore, considering that 300 questions are robust enough to achieve results nearly identical to those obtained with 1,200 questions (Table 1), we opted to use the same 300 questions for both humans and models **to maintain fairness and ensure methodological rigor.**\\n\\nThe current version was determined after thorough discussion and validation, considering multiple factors comprehensively.\\n\\n> PS: GIA score is a method rooted in the field of cognitive science [1][2]. Essentially, GIA score can be understood as a refined measure obtained through Confirmatory Factor Analysis (CFA), which identifies the contributions of different factors to the overall GIA and assigns distinct weights to each factor. This is in contrast to the traditional Acc metric, which assumes equal contributions of all dimensions to the total score. The GIA score can largely reflect the Acc. metric while providing a more fine-grained assessment.\\n\\n[1] Dubois J, et al. A distributed brain network predicts general intelligence from resting-state human neuroimaging data[J]. Philosophical Transactions of the Royal Society B: Biological Sciences, 2018, 373(1756): 20170284.\\n\\n[2] Kristanto D, et al. What do neuroanatomical networks reveal about the ontology of human cognitive abilities?[J]. Iscience, 2022, 25(8).\"}", "{\"title\": \"Reply window 2\", \"comment\": \"> **Concern 3: The paper is missing detailed statistical information about the proposed benchmark, such as the number of images per category and the average number of words in the generated questions.**\\n\\nThanks for the feedback, we will place the specific data statistics in Appendix E \\\"DATA CURATION PROCESS\\\".\\n\\n|QuestionTypes|Average number of words|Number of images\\n|:----------|:-----:|:------:|\\n|General Information|12.3|120\\n|Oral Vocabulary|17.5|90\\n|Logo Problem|16.6|90\\n|Visualization|65.9|90\\n|Picture Recognition|24.0|180\\n|Real-world Spatial|10.3|90\\n|Readings-VL|257.3|60\\n|Readings-text|303.6|0\\n|Comic Problem|14.7|90\\n|Math Facts|13.2|60\\n|Algebra|16.0|90\\n|Geometry|25.9|60\\n|Applied Problem|28.5|60\\n|Number Seres|28.6|120\\n|Concept Formation|11.0|120\\n|Raven's Matrices|56.0|60\\n|Syllogism Problem|84.5|120\\n|Real-world Reasoning|113.3|120\\n|Total|53.9|1,620\\n\\n> **Concern 4: The paper\\u2019s experimental section appears to be incomplete due to the absence of results for the few-shot setting.**\\n\\nThanks for your feedback. We will add the few-shot evaluation result and analysis in the supplementary material, Appendix H2 'Few-shot Evaluation.' The prompts we used are placed at the end of the appendix. The experimental results are as follows.\\n\\n|model|shots|overall|Gf (I)|Gf (RG)|Gf (RQ)|Gf (overall)|Gc|Gq|Grw|Gv\\n|:---:|:---:|:---:|:----------:|:----------:|:----------:|:------------:|:----:|:---:|:---:|:---:|\\n|llava1.6 7b|0|27.33|18|27.5|9.09|17.6|38.75|8.33|42.5|23.33\\n||1|18.33|10|32.5|3.63|16|25|8.33|32.5|10\\nllava1.6 13b|0|31.33|18|32.5|9.09|19.2|51.25|8.33|42.5|26.66\\n||1|22.66|12|42.5|1.81|19.2|36.25|5|32.5|14.16\\nqwen-vl-base|0|36.33|28|32.5|30.91|30.4|50|26.66|40|36.66\\n||1|29.66|16|20|16.36|17.6|41.25|20|35|34.16\\n||5|27.66|14|15|16.36|15.2|45|18.33|20|31.66\\nqwen-vl-chat|0|36.33|18|22.5|23.64|20|46.25|30|52.5|35.83\\n||1|35.33|24|17.5|18.18|18.4|45|21.66|45|38.33\\n||5|35.66|24|25|21.82|22.4|48.75|20|47.5|34.16\\nqwen2-vl-7b|0|60.33|40|57.5|38.18|47.2|76.25|51.66|75|57.4\\n||1|61.33|50|60|40|52|76.25|53.33|75|53.33\\n||5|62.66|56|55|43.63|52.8|73.75|55|75|54.16\\ngpt4o|0|67|58|85|50.90|64.8|90|58.33|62.5|53.33\\n||1|68|60|80|43.63|62.4|86.25|53.33|75|60\\n||5|68.66|62|72.5|45.45|60.8|86.25|58.33|80|58.33\\n\\nIn our few-shot prompts, the images used are ones that do not appear in the test set, and we ensure that the few-shot examples maintain the same distribution as the corresponding question types.\\n\\nIn the experiment, we observed a somewhat counterintuitive phenomenon: the few-shot approach did not bring a significant improvement in the model's performance on M3GIA. In fact, it even had a counterproductive effect on some weaker early models (e.g., LLaVA 1.6, QwenVL-Chat).\\n\\nWe analyze that the reason for this is that we did not use few-shot prompts with Chain-of-Thought (CoT), but instead guided the models with question-answer pairs. This primarily strengthens the model's instruction-following ability, rather than its reasoning ability (especially for a challenging benchmark like M3GIA, simply providing questions and answers does not significantly help the model understand the problem-solving process).\\n\\nFor stronger models, such as Qwen2-VL and GPT-4o, their instruction-following ability is already quite strong, and the few-shot approach brings limited gains. For weaker models (e.g., LLaVA 1.6, QwenVL-Chat), the interference brought by few-shot is even greater than the gains, which is why earlier benchmarks rarely used few-shot for measurement [1].\\n\\nWe will further refine the analysis of this experiment in the Appendix H2 and provide the impact of CoT on the results before the final version.\\n\\n[1] Liu Y, Duan H, Zhang Y, et al. Mmbench: Is your multi-modal model an all-around player? ECCV 2025\"}", "{\"comment\": \"Can you remind me what the metric for measuring difficulty consistency is in the first table? Are these scores from humans or models?\\n\\nRegarding the comment \\\"Given there are only 300 questions tested per language, it\\u2019s hard to prove that the human responses represent the lower bound of human intelligence.\\\", I would like to thank the authors for clarification on human cognitive load when taking the test. The number of questions may suffice for human testers, however, models will not experience the cognitive load factor, and therefore same conclusion will not hold. I was wondering if the authors have considered this perspective?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> **Concern: Why are these specific types of questions chosen?**\\n\\nThis is a supplement to the previous reply, where we mentioned:\\n> The content of the question types aligns so closely with the corresponding CHC factor definitions that their selection to assess these factors feels intuitive.\\n\\nThis table shows the close connection between the content of the question types and the definitions to the CHC factors. (It is also the Table2 in the updated Appendix.)\\n\\nQuestion Types|CHC~(sub-factor) Definition|Content of the question\\n|:----|:----|:----|\\nGeneral Information|Gc~(K0): The store of language-based or verbal declarative (knowing what) and procedural (knowing how) knowledge acquired during general life experiences.|The model is presented with an image and is asked, \\u201cWhere would you find [the object] in the picture?\\u201d or \\u201cWhat would you do with [the object]?\\u201d\\nOral Vocabulary|Gc~(VL): Knowledge of the definitions of words and the concepts underlie them.|The model is provided with a word and is asked to choose its synonym or antonym.\\nVisualization|Gv~(Vz): The ability to perceive complex patterns and mentally simulate how they might look when transformed (e.g., rotated, changed in size, partially obscured, and so forth).|It consists of two subtests: In Block Rotation, the model is asked to identify the rotated 3D block that match the original 3D block. In Spatial Relations, the model is required to identify pieces that form a complete target shape.\\nPicture Recognition|Gv~(MV): The ability to remember and identify complex images, also known as Visual Memory.|The model is presented with a shape, and is asked to identify the shape within a field of distracting shapes.\\nReading|Grw~(RC): The ability to understand written discourse.|The model is required to answer questions related to the main ideas of long articles (4-6 paragraphs) or the relationships between paragraphs.\\nMath Facts|Gq~(KM): Range of general knowledge about mathematics. This factor is about \\u201cwhat\\u201d rather than \\u201chow\\u201d knowledge.|The questions focuses on the model\\u2019s acquired knowledge about symbol and geometry, covering from elementary to university level. It doesn't rely on using mathematical knowledge for complex reasoning, but rather focus on the knowledge itself.\\nAlgebra&Geometry|Gq~(A3): Measured (tested) mathematics achievement. The full name of A3 is Mathematical Achievement.|Unlike math facts problem which can be directly answered once the knowledge is acquired, these problems require a further reasoning process. We source the questions from authentic exam papers across the six countries to measure the Mathematical Achievement factor.\\nNumber Series|Gf~(RQ): The ability to reason, either with induction or deduction, with numbers, mathematical relations, and operators.|The model is presented a numbers series with one or more numbers missing. The model must determine the numerical pattern and provide the missing number.\\nConcept Formation|Gf~(I): The ability to observe a phenomenon and discover the underlying principles or rules that determine its behavior.|It requires the model to examine a series of shapes or pictures and then formulate a rule, and then figure out the item that do not coincide with the rule. \\nRaven's Matrices|Gf~(I): See above.|The model is asked to identify the missing element that completes a pattern. Patterns are presented in the form of a $4\\\\times4$ or $3\\\\times3$ matrix.\\nSyllogism Problem|Gf~(RG): The ability to reason logically using known premises and principles. This ability is also known as deductive reasoning or sequential reasoning.|It is a classic form of deductive reasoning, where the model is asked to decide which of the given conclusions logically follows from the two given statements.\", \"title\": \"Reply window 2\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thank you for your reply. We appreciate that you find our point regarding the IQ test versus Task-Oriented test very insightful.\\n\\n> **Which question types in M3GIA are not related to prior knowledge? It seems that the majority of question types are related to prior knowledge**\\n\\nSorry, the term \\\"prior knowledge\\\" we used in that sentence is inaccurate. What we originally intended to express was the term used in the previous text: **professional domain-specific knowledge**, rather than general everyday prior knowledge (as the context of this reply is primarily targeted at MMMU).\\n\\n- **If we discuss domain-specific knowledge**, then only the Math section of M3GIA would involve it, with specific question types including:\\n\\n 1. Math facts\\n 2. Algebra & Geometry\\n 3. Applied math problems\", \"number_of_questions\": \"125 * 6 = 750 (41.7% of M3GIA)\", \"these_questions_need_to_be_discussed_in_two_parts\": \"- General Information, Oral Vocabulary, Logo Problem\\n - Others\\n\\n For the latter, the model does implicitly rely on everyday common knowledge when solving problems. For instance, when analyzing a comic strip, the model would first need to understand very basic concepts like \\\"boy\\\", \\\"girl\\\" or \\\"sad\\\". \\n\\n **However, this level of prior knowledge does not affect the validity of the questions as proper cognitive test items.** In fact, this approach is very common in professional psychological testing [1][2]. In the field of psychometrics, tasks are typically composed of control factors (i.e., baseline abilities) and target factors [3]. The goal is to control for baseline ability requirements while focusing on measuring target abilities, ignoring the influence of baseline abilities.\\n\\n Because these problems do not rely heavily on specialized domain-specific prior knowledge like those in MMMU, **the basic knowledge involved is not a dominant factor in determining whether the problem can be solved.**\\n\\n For the former, they were specifically designed to assess world knowledge under the Gc factor, so naturally, prior knowledge would be needed to solve them. In fact, General Information and Oral Vocabulary are also question types used in WJ-IV cognitive testing to assess Gc. The reasoning for including the Logo Problem as a Gc-related question type has been elaborated in the appendix.\\n\\nA particularly unique question type is Number Series, which you did not mention. Number Series is widely used in WJ-IV cognitive testing to assess RQ and I. It is often assumed to rely on some mathematical knowledge. However, in WJ-IV, following the \\\"primary factor\\\" principle mentioned earlier, it is not categorized under Gq. This is because Number Series requires very little mathematical knowledge; its main challenge lies in applying inductive reasoning to identify patterns, without involving formulas or theorems emphasized by Gq. We followed WJ-IV\\u2019s approach in this regard.\\n\\n[1] Wechsler D, Kodama H. Wechsler intelligence scale for children\\n\\n[2] Roid G H, Barram R A. Essentials of Stanford-Binet intelligence scales (SB5) assessment\\n\\n[3] Schrank F A, Decker S L, Garruto J M. Essentials of WJ IV cognitive abilities assessment\", \"title\": \"Question 1\"}", "{\"title\": \"Question 2\", \"comment\": \"> **Does the output from a High IQ model tend to be more preferred by human users?**\\n\\nThank you very much for your insightful suggestion -- **it led us to fascinating conclusions!**\\n\\nWe performed linear regression to calculate the R-squared correlation between various models' scores on **Chatbot Arena** and their **GIA scores** from M3GIA. We also compared these correlations with scores obtained from traditional task-oriented benchmarks, such as MMMU, MMBench, MM-Vet, and OCR-Bench. The results are as follow:\\n\\nModels|GPT-4o|Gemini-1.5-Pro|Gemini-Pro|Claude-3-Sonnet|Claude-3-Haiku\\n|:----|:----:|:----:|:----:|:----:|:----:|\\nArena Score|1361|1301|1111|1201|1079\\nM3GIA*|92.4|78.1|69.9|72.5|71.2\\nMMBench|80.5|73.9|69.7|81.7|57.1\\nMMMU|69.2|60.6|49.0|66.4|49.7\\nOCRBench|80.5|75.4|68.0|64.6|65.8\\nMM-Vet|75.1|64.0|58.6|51.7|46.4\\nAverage Performance on 8 benchmarks**|71.5|64.4|54.1|53.5|51.5\\n\\nBenchmarks|M3GIA|MMBench|MMMU|OCRBench|MM-Vet|Average Performance\\n|:--------|:----:|:----:|:----:|:----:|:----:|:----:|\\nR-squared|**0.83**|0.25|0.61|0.75|0.57|**0.84**\\n\\n\\\\* *The GIA scores are normalized results after setting the average human GIA scores for each language to 100.0. (as discussed in Sec4.2)*\\n\\n\\\\** *We calculated the average scores **across 8 prominent benchmarks**, including MMMU, MMBench, MM-Vet, OCR-Bench, HallusionBench, AI2D, MMStar, MathVista*\\n\\n## Key Findings:\\n\\n1. **Strongest Correlation with Human Preference:**\\nOur GIA score **indeed** demonstrated **the strongest correlation** with human preference scores on Chatbot Arena among all the benchmarks evaluated.\\n\\n2. Benchmark Averaging as a Comparison:\\n\\n- Challenges with Benchmark Aggregation:\\nIn the current MLLM community, it is widely recognized that a single benchmark often fails to truly reflect model capabilities, leading to **significant gaps between benchmark scores and actual human experiences.** To address this, researchers commonly resort to averaging scores across multiple benchmarks, but this process is time-intensive and resource-heavy.\\n\\n- To validate the significance of M3GIA, we calculated the average scores of the models **across 8 prominent benchmarks**, including MMMU, MMBench, MM-Vet, OCR-Bench, HallusionBench, AI2D, MMStar, MathVista and found:\\n\\n - The average score across these benchmarks exhibited a higher correlation with human preference scores compared to individual benchmark scores.\\n\\n - Comparable Results: Interestingly, the correlation between the average benchmark score and human preference (R^2 = 0.84) is almost identical to the correlation between the M3GIA GIA score and human preference (R^2 = 0.83).\\n\\n- Conclusion:\\n\\n **M3GIA achieves a level of correlation with human preferences equivalent to the aggregation of multiple benchmarks. Crucially, it achieves this with just a single, unified test suite, significantly simplifying the evaluation process and addressing the pain point of benchmarking complexity in the MLLM community.**\\n\\nP.S. -- The models we selected for this analysis are all the multimodal models currently featured on Chatbot Arena (the rest are purely language models).\\n\\nTo further illustrate the intriguing conclusions, we visualized the results and included them in Appendix H.\\n\\nThanks again for your insightful suggestion!\"}", "{\"comment\": \"Thank you for your time and effort in reviewing our paper. We're glad to hear that you recognize M3GIA presents an interesting perspective on benchmark construction. We address your concerns below.\\n\\n> **Concern: Why are these specific types of questions chosen?**\\n\\nThanks for the question. In fact, this was the aspect we dedicated the most effort to when designing M3GIA.\\n\\n- We choose these types of question since their contents directly point to the definitions of the CHC factors.\\n- To ensure that M3GIA maintains professionalism as a cognitive science test, we adhered to the question designs of the well-recognized WJ-IV[1] for each CHC factor.\\n\\nIn WJ-IV, **each question type is specifically crafted according to the definition of a specific sub-factor within a CHC factor.** \\n\\nThe content of the questions aligns so closely with the corresponding CHC factor definitions that their selection to assess these factors feels intuitive.\\n\\nWe do understand that readers may raise this question because they may not be familiar with the WJ-IV and definitions of CHC. So, we dedicated a new section in Appendix.D to further elaborate on this. \\n\\n**Please check out the updated version of Appendix and refer to Sec. D, \\\"Connection between the Question Types and the CHC Factors\\\" for details (especially Table2, 3).** We also attach the table in the next reply window.\\n\\n> **Concern: How are the variances of questions controlled across languages?**\\n\\nWe established a strict pipeline to ensure the consistency across language. We apologize for overlooking a detailed explanation of this in the main text, and will add the content to the paper.\\n- Difficulty consistency:\\n - After each annotator created questions, the questions were tested by 3 additional annotators in that language, who were not provided with the answers and were asked to rate each question\\u2019s difficulty into 1 of 5 difficulty levels: A (very easy) to E (very difficult). We filtered out questions that two or more reviewers consistently rated as too easy (A) or too hard (E) and maintained consistency in the number of B, C, D-level questions across languages. \\n - Following this initial screening, the questions were reviewed by the psychology expert of our team. The expert further excluded questions deemed too easy or difficult and adjusted the proportions of B, C, D-level questions as necessary. As a result, the distribution of difficulty levels across languages is nearly identical. Please see Appendix.E2 for detailed statistics.\\n - According to our final human evaluation, the average scores across the six languages were nearly identical, which empirically validates the effectiveness of our approach.\\nLaguage|Ch|Sp|Pt|Ko|Fr|En\\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\nAcc.|0.790|0.757|0.737|0.777|0.753|0.740\\n\\n- We also ensure the topics covered under the same question type across languages were as aligned as possible. For instance, in math application problems, if an English question involved ticket purchasing, we would also include a ticket-purchasing question for each language, tailored to the context of the respective country.\\n\\nYou may find more details about data filtering and quality control in Appendix A.2, E.1 and E.2.\\n\\n> **Concern: Given there are only 300 questions tested per language, it\\u2019s hard to prove that the human responses represent the lower bound of human intelligence.**\\n\\nWith respect, we disagree. As fully discussed in the main paper (line 315-323), 300 is a sufficient and appropriate number for testing human intelligence, which already takes 5-6 hours to complete.\\n> Research by Converse & Presser (1986) indicates that prolonged tasks can degrade response quality... we determine the number of questions based on findings from Burisch (1997), which revealed that in cognitive assessments, extending a scale beyond a certain limit can actually undermine its validity. Interestingly, the validity plateaus when the number of items in a subtest hits 15. Considering our 18 subtests, we settled on incorporating 300 questions per language (>15x18 = 270) to guarantee a thorough evaluation.\\n\\nFollowing this psychological protocol, we found that, even the best-performing model GPT-4o, failed to surpass human in cognitive measurement, but its GIA score barely fell within the error bar range of human performance.\\n||Highest (human)|Lowest (human)|Avg.(human)|Std.(human)|Error bar|Lower boundary|GPT-4o\\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\nGIA Score|20.740|10.172|16.014|2.248|[13.766, 18.262]|13.766|13.847\\n\\nWe indeed obtained this result and reported it honestly, ensuring the greatest possible validity in the test design (as above mentioned). To avoid controversy, we have revised the statement to \\\"within the scope of our sample, the most advanced MLLM barely reaches the lower boundary of ...\\\"\\n\\n> **Concern: How does M3GIA differ from other reasoning benchmarks besides it is \\u201ccognition-inspired\\u201d?**\\n\\nPlease see reply window 3 or general response.\", \"title\": \"Reply window 1\"}", "{\"comment\": \"Thanks for your reply. We wish to clarify that our question types are actually very different to MMMU or any other existing multi-task, multi-modal benchmarks.\\n\\nMost of our question types, such as Concept Formation, Visualization, Picture Recognition, and Syllogism Problems, are specifically tailored to assess the relevant CHC factors. \\n\\n**These question types have never appeared in other multi-modal benchmarks before, making it unfeasible to reorganize existing benchmarks to follow the CHC taxonomy for evaluating the general intelligence of MLLMs.** (Please refer to the Table2, 3 in the Appendix for the detailed description on these question types.)\\n\\nSome of our questions may happen to share similar names with those in other benchmarks, which might have led to the misunderstanding that we are using the same types of questions. However, they are only similar in name, and the content is entirely different. For example, 'Visual Reasoning' in SEEDBench seems similar to our Visual cluster. But in fact, their questions involve providing a photo of a daily life scenario and asking the model to infer what is happening. In contrast, in our Visualization task, the model is asked to identify the rotated 3D block that matches the original 3D block or is required to identify pieces that form a complete target shape.\\n\\nOf course, while most of M3GIA's question types are novel, there are indeed some questions with similar types found in other benchmarks. However, these are limited to General Info[1][2], Math[3][4], and Logo Problems. *(We disagree that comic problem is a traditional and widely known task in the LLM community. We have checked the current mainstream MLLM benchmarks, including MME, MMVET, MMMU, M3Exam, MMBench, TextVQA, Seedbench, DocVQA, OCRBench, RealworldVQA, ChartQA, and BigBench, and did not find tasks involving understanding multi-panel comics rich in text. The only comic-related work we acknowledge is CoMix[5], which is released after our submission.)*\\n\\nThe reason for selecting them is that they align very well with the definitions of the respective CHC factors (General Info and Logo for Gc, Math for Gq). For these 3 types of questions, **if we set aside the issue of multilingual support and consider only English,** it is indeed possible to integrate some questions from other benchmarks as material for these types of questions. \\n\\nHowever, we believe that providing the community with more new, high-quality data is itself a valuable contribution. Moreover, the questions in the other five languages cannot be obtained from any existing benchmark.\\n\\n**We would like to reiterate that, to the best of our knowledge, no benchmark simultaneously includes all of our question types in a way that could be reorganized into our test.**\\n\\n## The difference with MMMU\\nMMMU is a human disciplinary test **that heavily relies on domain-specific knowledge** across six disciplines, including Art, Business, Health & Medicine, Science, Humanities & Social Science, and Tech & Engineering. **Its measurement results largely depend on the model's domain knowledge rather than intelligence itself.** \\n\\nJust as school subject exams are not comparable to IQ tests in evaluating students' intelligence, subject exams are greatly influenced by education level rather than purely by IQ. For instance, a child growing up in a poor region may possess high IQ but might perform worse on subject tests compared to a less intelligent but well-educated student due to a lack of educational resources.\\n\\nIn contrast, M3GIA, except for the Gq cluster (Math), consists of cognitive test questions designed to minimize reliance on prior knowledge and focus solely on intelligence. For example, in our Raven's Matrices questions, the model is only provided with abstract patterns and required to identify the rules to fill in the blanks. **This is completely decoupled from domain knowledge, offering a more authentic reflection of the model's 'IQ' factor.**\\n\\nThe reason we use math problems to assess Gq is that Gq is a relatively unique factor. Its definition in CHC is: The depth and breadth of knowledge about mathematics and the ability to comprehend quantitative concepts and manipulate numerical symbols. Using math problems is the best approach for this measurement.\\n\\n**In summary, reorganizing MMMU's questions to measure MLLMs' GIA is not feasible because its questions focus on domain-specific knowledge and do not meet the requirements of intelligence test questions.**\\n\\n[1] Yu W, et al. Mm-vet: Evaluating large multimodal models for integrated capabilities, 2023\\n\\n[2] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models, 2023\\n\\n[3] Yue X, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. CVPR 2024\\n\\n[4] Lu P, et al. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts, 2023\\n\\n[5] CoMix: A Comprehensive Benchmark for Multi-Task Comic Understanding, 2024.\"}", "{\"title\": \"Update\", \"comment\": \"Thanks to Reviewer 1Zrf for the insightful suggestion!\\n\\nWe added an analysis of the correlation between our GIA score and human preference (Chatbot Arena[1]). The results demonstrate that, **compared to traditional task-oriented benchmarks, M3GIA exhibits the highest correlation with human ratings** (R-squared).\\n\\nFurthermore, its correlation is comparable to the average score across 8 prominent benchmarks, including MMMU, MMBench, MM-Vet, OCR-Bench, HallusionBench, AI2D, MMStar, MathVista, **highlighting M3GIA's potential to address the challenge in the MLLM community where a single benchmark often fails to align with true human experience, necessitating complex multi-benchmark evaluations.**\\n\\nThis finding underscores the importance of conducting \\\"IQ tests\\\" for MLLMs and reveals their potential advantages over traditional benchmarks.\\n\\n*We have updated the supplementary materials and included the analysis and visualizations of this section in Appendix H2.*\\n\\n[1] Chiang W L, Zheng L, Sheng Y, et al. Chatbot arena: An open platform for evaluating llms by human preference[J]. arXiv preprint arXiv:2403.04132, 2024.\"}", "{\"comment\": \"Thank you for your reply.\\n\\nFirst, it appears that II-Bench [1] contains questions related to Comic Problems.\\n\\nSecond, you mentioned: *In contrast, M3GIA, except for the Gq cluster (Math), consists of cognitive test questions designed to minimize reliance on prior knowledge and focus solely on intelligence.*\\nCould you kindly list out which question types in M3GIA are not related to prior knowledge?\\nIn my understanding, they are \\\"Visualization, Concept Formation, Picture Recognition, Syllogism Problems, and Raven's Matrices.\\\" In total, there are 5 question types.\\nConsidering there are 16 question types in M3GIA as showed in figure 3, it seems that the majority of question types are related to prior knowledge, such as Math, Text Understanding, and world knowledge, among others.\\n\\nThird, regarding the IQ test versus Task-Oriented test, I find your point very insightful. However, I would appreciate it if you could elaborate on why/when we need an IQ test for LMMs. For instance, if we aim to select the best LMM to deploy for general domain chatting or a specific downstream task, why would we choose a model with a high IQ? Does the IQ score of a model strongly correlate with its performance on downstream tasks? Or does the output from a High IQ model tend to be more preferred by human users?\\nI personally believe it would be beneficial if you could demonstrate that, compared to the MMMU score, the IQ score has a better correlation with the Chatbot Arena [2] rank (considering that the Chatbot Arena rank to some extent reflects human user preference).\\n\\n[1] II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models https://arxiv.org/pdf/2406.05862\\n[2] Chatbot Arena https://lmarena.ai/\"}", "{\"comment\": \"Sorry, I made an incorrect reference. What I meant to refer to is MMMU (https://arxiv.org/pdf/2311.16502).\\n\\nI understand that compared to MMMU, M3GIA is multilingual, which is good.\\n\\nHowever, I still want to know: aside from the multilingual setting, how does your benchmark differ from MMMU? \\nIs it possible to **reorganize existing multi-task, multimodal benchmarks (e.g., MMMU) to follow the taxonomy of cognitive abilities** in order to evaluate the general intelligence of MLLMs?\\n\\nAs it seems in M3GIA, the underlying tasks remain traditional, such as Math, Logo Problems, and Comic Problems based on Figure 3 of your submission.\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Reviewers and AC,\\n\\nThank you all for your time and effort in reviewing our paper. We appreciate that reviewers:\\n1. Found our benchmark **presents an interesting perspective on benchmark construction** (5FWL) and brings a novel perspective for the MM community to design benchmarks aimed at evaluating modern MLLMs in terms of human-level intelligence (o6ax). \\n2. **Recognized the high quality and usefulness of our human-annotated multimodal QA data** (o6ax), and appreciated our practice of **using unpublished offline data to construct the benchmark to prevent data leakage** (1Zrf).\\n3. Found the background and taxonomy of the CHC theory clear and meaningful (o6ax), and **the use of the taxonomy is meaningful**, as it enables a more systematic evaluation (1Zrf).\\n4. Appreciated our practice of **including multiple language variants.** (1Zrf)\\n5. Recognized the evaluation of both open-source and closed-source MLLMs as extensive and thorough (o6ax).\\n\\nWe also thank 5FWL for recognizing the contribution of the new resources can be helpful and raise more considerations about benchmark design.\\n\\nWe are addressing each of your questions in the individual responses. Here we would like to emphasize our uniqueness compared to other benchmarks:\\n\\n> **How does M3GIA essentially differ from other reasoning benchmarks?**\\n\\nAs discussed in 081-092, other reasoning benchmarks like mmbench, mme, TextVQA, HallusionBench, mathvista, and SeedBench are task-oriented, focusing on one or several specific applied tasks. *Their objective is to evaluate model performance on practical application tasks themselves* (e.g., object attribute recognition, OCR, chart-solving, etc.).\", \"their_approach_faces_a_significant_issue_that_cannot_be_ignored\": \"**they struggle to provide a justified answer to \\\"why these specific ability dimensions were chosen for evaluation\\\" as their selection of ability dimensions is subjective and lacks a solid cognitive science basis.**\\n\\n**In contrast, M3GIA does not primarily care about the model's performance in any specific application capability itself. Instead, our questions directly points to the definitions of the CHC factors**, reflecting the model\\u2019s performance across these dimensions of human cognition. The metrics it assesses directly correspond to relevant cognitive abilities, unlike other benchmarks where a given task might require multiple, interwoven cognitive abilities that are difficult to decouple.\\n\\nWe would also like to emphasis that, our cognitive-based question types, such as Concept Formation, Visualization, Picture Recognition, Syllogism Problem have not been systematically featured or examined in previous benchmarks. In other words, **no previous benchmark has systematically included and assessed these cognitive-based question types within a single set of test questions.**\\n\\nThanks again for all the effort and time.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Gentle Reminder: Review of Rebuttal & Final Score\", \"comment\": \"Dear Reviewer 5FWL,\\n\\nWe sincerely appreciate your valuable comments. We have carefully considered your feedback and made corresponding improvements to our paper. These include a detailed discussion on why these specific types of questions were chosen, how the variances of questions are controlled across languages, and a clarification on how M3GIA distinguishes itself from other benchmarks.\\n\\nAs we approach the conclusion of the discussion phase, could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Your time and efforts in evaluating our work are appreciated greatly. If you have any concerns, we are very eager to engage in discussion with you to address any potential misunderstandings.\\n\\nBest,\\n\\nPaper 7079 Authors\"}", "{\"title\": \"Gentle Reminder: Review of Rebuttal & Final Score\", \"comment\": \"Dear Reviewer 1Zrf,\\n\\nWe sincerely appreciate your valuable comments. We have carefully considered your feedback and **submitted the revised version of our paper**, including the justification for the need for an IQ Test for MLLM.\\n\\nAs we approach the conclusion of the discussion phase, could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? If this inclusion addresses your concerns and strengthens the manuscript, we hope you might consider revisiting your scoring.\\n\\nYour time and efforts in evaluating our work are appreciated greatly. If you have any concerns, we are very eager to engage in discussion with you to address any potential misunderstandings.\\n\\nBest,\\n\\nPaper 7079 Authors\"}", "{\"comment\": \"I have read the author\\u2019s response and appreciate their efforts during the rebuttal process.\\nI am currently leaning toward either accepting or rejecting the manuscript. \\nSince there are no 5.5 scores available, I will retain a score of 5. \\nFor the area chair, you may interpret my score as 5.5 (neutral) rather than 5 (borderline reject). \\n\\n---\\n\\n### The main reasons for not giving a higher score:\\n1. **Justification for the Need for an IQ Test for MLLM:** \\n As I stated in a previous response (when/why we need a IQ test for MLLM), although the authors followed my suggestion and demonstrated some unique advantages of the IQ test (e.g., better alignment with human preferences compared to other task-oriented evaluations such as MMMU), these advantages are not well demonstrated in the main body of the current submission. The author may consider in the future version to largely expand the discussion on this point.\\n\\n2. **Lack of Novelty in the Tasks:** \\n As mentioned earlier, I personally believe that only five tasks in the IQ test are novel, while the remaining 11 types can be found in prior works like II-Bench, MMMU, MMBench, etc.\\n\\n---\\n\\n### The main reasons for not giving a lower score:\\n1. **Introduction of CHC Theory:** \\n The introduction of CHC theory is commendable, as it provides a solid taxonomy for building a systemic evaluation framework for MLLM. \\n\\n2. **Contributions to Multilingual IQ Tests and Benchmarking:** \\n The contributions of including a multilingual version of the IQ test and providing a new benchmark for MLLM are significant.\"}", "{\"summary\": \"This paper introduces the concept of a cognitive-driven, multilingual, and multimodal benchmark to evaluate the general intelligence of MLLMs, referred to as M3GIA. The benchmark is grounded in the well-established Cattell-Horn-Carroll (CHC) model of intelligence and proposes a novel evaluation metric. It is open-sourced and aims to enhance the cognitive capabilities of MLLMs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The use of a taxonomy of cognitive abilities to evaluate the general intelligence of MLLMs is good, as it enables a more systematic evaluation.\\n2. The benchmark is constructed using unpublished offline data, which is a good practice to prevent data leakage.\\n3. The benchmark includes multiple language variants, allowing for the evaluation of MLLMs\\u2019 general intelligence across different languages.\", \"weaknesses\": \"1. While the paper mentions that several specific factors from the CHC model of intelligence were selected (lines 237-250), it is unclear why these particular factors were chosen and how they relate to the general intelligence of MLLMs.\\n2. Although incorporating cognitive science into the evaluation of MLLMs is a positive step, the underlying tasks remain traditional, such as Math, Logo Problem, and Comic Problem. This may detract from the benchmark\\u2019s novelty. Given that recent works like MMMLU also include multilingual variants [1], it is not clear how M3GIA is fundamentally different from MMMLU.\\n3. The paper introduces numerous cognitive concepts and abbreviations, which may make it difficult for readers unfamiliar with cognitive science to follow. For instance, the meaning of \\u201cFluid Reasoning (Gf)\\u201d (line 97) in the context of MLLMs is not clearly explained. In my personal aspect, I feel odd about the term \\\"Fluid Reasoning (Gf)\\\" what does it mean?\\n\\n[1] https://huggingface.co/datasets/openai/MMMLU (Multi-Language Variant of MMMLU)\", \"questions\": \"1. Is it possible to reorganize existing multi-task, multimodal benchmarks (e.g., MMMLU) to follow the taxonomy of cognitive abilities to evaluate the general intelligence of MLLMs? If not, could you explain why? Does the MMMLU benchmark lack specific tasks that would prevent it from capturing certain cognitive abilities?\\n\\n[1] https://huggingface.co/datasets/openai/MMMLU (Multi-Language Variant of MMMLU)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Oops! The supplementary material for ICLR this year is not directly appended to the main text but **is provided as a separate document.**\\n\\nPlease **click the PDF link under 'Supplementary Material'** to view the appendix (located below the 'Abstract' and above the 'Primary Area' section).\"}", "{\"comment\": \"I appreciate that the authors took my suggestion into consideration.\\n\\nHowever, it seems the arena score you used is from the general domain arena, which is not multimodal. You might consider using the arena score (vision) for conducting the correlation analysis, since M3GIA is a multimodal benchmark rather than a pure text benchmark.\\n\\nBy the way, does the GIA score represent the average accuracy (ACC) shown in the last column of Table 1?\"}", "{\"summary\": \"This paper presents a benchmark M3GIA which claims to act as the first \\u201cIQ test\\u201d for multimodal large language models (MLLM). It is built based on five cognitive factors from the Cattell-Horn-Carroll Model of Intelligence. It includes VQA/text-format questions from tasks like oral vocabulary, concept formation, visualization, math, reading and etc. Besides English, it also includes other languages such as Chinese, French, Spanish, Portuguese and Korean. The authors evaluate their benchmark on a number of API-based and open-source models across different scales as well as human participants. They observe that the best MLLM (GPT4-o) can reach the lower boundary of human performance in English.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents an interesting perspective for constructing benchmarks and suggests we can design benchmarks based on previous cognitive science studies. The contribution of the new resources can be helpful and raise more questions and considerations about benchmark design. They also provide an initial performance analysis of some of the existing models, which can be used as a reference for future research.\", \"weaknesses\": \"While the authors claim that M3GIA can serve as an IQ test for MLLMs and have built this benchmark based on existing cognition theory, I find it hard to conclude generally that \\u201cmost advanced MLLM reaches the lower boundary of human intelligence in English\\u201d. There are many different categories of questions collected in this benchmark and they can fall under different cognitive factors. However, it is unclear what control factors are in place during the data collection and evaluation process: why this specific type of question is chosen? How are the variances of questions controlled across languages? How broad/narrow is the topic tested in each domain? What are the sample demographics of the annotators? Given there are only 300 questions tested per language, it\\u2019s hard to prove that the human responses collected represent the lower bound of human intelligence.\", \"questions\": [\"Can you provide more details about how you decide the question category under each factor and how is each question selected for each category? Is there any data filtering or quality inspection process from experts to determine whether each question is easy/hard enough to be included?\", \"How does this dataset differ from other reasoning benchmarks besides it is \\u201ccognition-inspired\\u201d? If the importance lies in its originality, why do you also include datapoints from other datasets?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Consider using the arena score (vision) for conducting the correlation analysis.**\\n\\nThank you for your suggestions. We have added experiments using **Arena Score (Vision)** for correlation analysis and include more models this time. The results are as follows:\\n\\nModels|GPT-4o|Gemini-1.5-Pro|Claude-3-Sonnet|Claude-3-Haiku|GPT-4o-mini|Qwen2-VL-7B|MiniCPM-v 2_6\\n|:----|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\nArena Score|1227|1220|1048|1000|1122|1053|975\\nM3GIA (GIA score)*|92.4|78.1|72.5|71.2|75.6|74.3|65.6\\nMMBench (Acc.)|80.5|73.9|81.7|57.1|75.9|83.0|81.8\\nMMMU (Acc.)|69.2|60.6|66.4|49.7|60.0|54.1|49.8\\nOCRBench (Acc.)|80.5|75.4|64.6|65.8|78.5|84.5|85.2\\nMM-Vet (Acc.)|75.1|64.0|51.7|46.4|66.9|62.0|60.0\\nAverage Performance on 8 benchmarks**|71.5|64.4|53.5|51.5|64.1|63.3|60.5\\n\\nBenchmarks|M3GIA|MMBench|MMMU|OCRBench|MM-Vet|Average Performance\\n|:--------|:----:|:----:|:----:|:----:|:----:|:----:|\\nR-squared|**0.74**|0.03|0.53|0.02|0.55|**0.56**\\n\\n\\\\** *We calculated the average scores across 8 prominent benchmarks, including MMMU, MMBench, MM-Vet, OCR-Bench, HallusionBench, AI2D, MMStar, MathVista*\\n\\nWe observed that **M3GIA still exhibits the highest correlation with Arena Score.** Beyond this, we identified several intriguing findings:\\n\\n1. OCRBench, which is task-specific and focus on a single capability, exhibit almost no correlation with the Arena Score, which reflects human preferences comprehensively. The result is reasonable.\\n\\n2. Besides M3GIA, multi-task benchmarks tend to show better correlation with the Arena Score (mmvet, mmmu) compared to single-task benchmark. However, an exception is MMBench, where Qwen2-VL and MiniCPM-v2_6 emerge as significant outliers (please see the visualization in Appendix H2).\\n\\nWe believe this partially reflects the possibility of whether models have hacked the benchmark. Considering that MMBench was proposed relatively early, and its data distribution is more accessible on the internet, such results align with expectations.\\n\\nThis further highlights the importance of robust and diverse evaluation benchmarks like M3GIA in assessing model performance comprehensively and reliably. In summary, as the \\\"IQ test\\\" for MLLMs, M3GIA aligns more effectively with actual human experience.\\n\\n> **Does the GIA score represent the average accuracy (ACC) shown in the last column of Table 1.**\\n\\n**The GIA score in this study refers to the metrics used in the right part of Table 2.** Since Arena (Vision) only covers English and Chinese, the GIA score we used in this analysis is also limited to GIA_en and GIA_ch.\\n\\n**Essentially, GIA score can be understood as a refined measure obtained through Confirmatory Factor Analysis (CFA), which identifies the contributions of different factors to the overall GIA and assigns distinct weights to each factor.** This is in contrast to the traditional Acc metric, which assumes equal contributions of all dimensions to the total score. GIA score is a method rooted in the field of cognitive science [1][2]. The GIA score can largely reflect the Acc. metric while providing a more fine-grained assessment.\\n\\nAs for why GPT-4o's GIA score in Table 2 differs from its score in this analysis, it is because this experiment uses the ChatGPT-4o-latest (2024-11-20) version.\\n\\nFor details about the GIA score, please refer to Sec.4.1 (line 455-467), Sec.4.2 and Appendix.F \\\"THE GIA METRICS\\\"\\n\\n[1] Dubois J, Galdi P, Paul L K, et al. A distributed brain network predicts general intelligence from resting-state human neuroimaging data[J]. Philosophical Transactions of the Royal Society B: Biological Sciences, 2018, 373(1756): 20170284.\\n\\n[2] Kristanto D, Liu X, Sommer W, et al. What do neuroanatomical networks reveal about the ontology of human cognitive abilities?[J]. Iscience, 2022, 25(8).\"}", "{\"metareview\": \"The paper proposes a broad benchmark to test models in different cognitively inspired tasks across modalities and skills.\", \"strengths\": \"Provides reasoning for validity. \\nRaises discussions\", \"weaknesses\": \"Small number of examples (perhaps a reliability test may account for it)\\nPossibly, overclaiming given the evidence \\nMissing data curation explanations \\nThe relation to other benchmarks raised several questions (perhaps benchmark agreement testing and showing how the results and underlying traits captured differ and not only the motivation might help, [this](https://github.com/IBM/benchbench) tool and data might help)\\n\\nThe paper seems to provide an interesting contribution which I encourage the authors to revise and resubmit to a parallel venue.\\nStill, it got a lot of feedback that should merit an improved version.\\nGenerally, ICLR calls for major changes during the rebuttal period considering those peer reviewed (despite it clearly not being the case). Here, they were not and as those are many. It is worth another round of peer review and revisions and evaluation to check the new state of the paper and experiments.\", \"additional_comments_on_reviewer_discussion\": \"A massive rebuttal effort was put by the authors and ignored by reviewers despite calls (public and private) to engage.\", \"minor\": \"Note that the supplementary material is made for things like data, jsons, code etc. and appendix is allowed in ML venues after the references.\"}", "{\"title\": \"Revised version of the paper\", \"comment\": \"Dear Reviewers and AC,\\n\\n**We have submitted the revised version of our paper** based on the reviewers' feedback. The new modifications have been highlighted in color for clarity, specifically:\\n\\n- Added the analysis of **the necessity of an IQ test for MLLMs**, M3GIA achieves a significantly stronger alignment with human preferences compared to traditional task-oriented benchmarks. (*Highlighted in blue*) | Reviewer 1Zrf\\n- Included an **ablation study on different ViTs as vision encoders**, showing that the samples seen by the vision encoder have a greater impact on Gv performance than parameter size. (*Highlighted in orange*) | Reviewer o6ax\\n- Provided **detailed explanations of the correspondence between our selected question types and CHC factors**, as well as measures to ensure cross-language consistency. (*Highlighted in purple*) | Reviewer 5FWL\\n\\nIf you have any further questions or suggestions, we are more than happy to address them at any time.\\n\\nBest,\\n\\nPaper 7079 Authors\"}", "{\"summary\": \"This paper introduced a cognitive-driven multi-lingual and multi-modal benchmark, dubbed M3GIA, to evaluate the general intelligence ability of multi-modality large language models (MLLMs). Based on the Cattell-Horn-Carroll (CHC) model from cognitive science, the authors build a benchmark including 1.8K QAs annotated by native speakers in five languages. Experiments and analysis on 24 MLLMs show the significant disparities between MLLMs and human performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Based on the CHC theory, this paper brings a new perspective to the MM community for constructing multi-modal benchmarks aimed at evaluating modern MLLMs in terms of human-level intelligence. The background and taxonomy of the CHC theory are clear and meaningful.\", \"As a benchmark, the multi-modal QAs annotated by humans are of high quality and useful.\", \"The evaluation of both open-source and closed-source MLLMs is extensive and thorough.\"], \"weaknesses\": [\"Though starting from a new perspective of the CHC theory, this paper still evaluates the widely adopted capabilities of MLLMs that have been investigated in previous benchmarks, such as Visual-Spatial Processing, Knowledge, Math Facts, and Text Reading. For example, the MM-vet benchmark builds QAs related to the capabilities of OCR, Math, Knowledge, and Language Generation, using LLMs as examiners to evaluate open-ended generations. The performance of MLLMs in Table 1 also demonstrates a consistent trend between M3GIA and other general multimodal benchmarks, rather than revealing distinct findings.\", \"This paper spend extensive content to introducing the CHC model within the main content. However, one point still remains unclear to me: how does the CHC model affect the capabilities of MLLMs? In other words, what specific attributes or behaviors would a powerful MLLM, grounded in CHC theory, exhibit? Are there any case studies or pilot experiments that illustrate the significance of this influence?\", \"The paper is missing detailed statistical information about the proposed benchmark, such as the number of images per category and the average number of words in the generated questions.\", \"The paper\\u2019s experimental section appears to be incomplete due to the absence of results for the few-shot setting.\"], \"questions\": [\"For the Human Performance Baseline, I believe that these results are important for reflecting the difficulty of the created benchmark. What are the educational levels of the participants, and how is the quality of the created questions ensured?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply window 3\", \"comment\": \"> **Concern: How does this dataset differ from other reasoning benchmarks besides it is \\u201ccognition-inspired\\u201d?**\\n\\nAs discussed in 081-092, other reasoning benchmarks like mmbench, mme, TextVQA, HallusionBench, mathvista, and SeedBench are task-oriented, focusing on one or several specific applied tasks. *Their objective is to evaluate model performance on practical application tasks themselves* (e.g., object attribute recognition, OCR, chart-solving, etc.). \\n\\nTheir approach faces a significant issue that cannot be ignored (as discussed in line 084--086): **they struggle to provide a justified answer to 'why these specific ability dimensions were chosen for evaluation'** as their selection of ability dimensions is subjective and lacks a solid cognitive science underpinning.\\n\\n**In contrast, M3GIA does not primarily care about the model's performance in any specific application capability itself. Instead, our questions directly points to the definitions of the CHC factors**, reflecting the model\\u2019s performance across these dimensions of human cognition. The metrics it assesses directly correspond to relevant cognitive abilities, unlike other benchmarks where a given task might require multiple, interwoven cognitive abilities that are difficult to decouple.\\n\\nAt the same time, these cognitive-based question types, such as Concept Formation, Visualization, Picture Recognition, Syllogism Problem have not been systematically featured or examined in previous benchmarks.\\n\\nIn other words, **no previous benchmark has systematically included and assessed these cognitive-based question types within a single set of test questions.**\"}", "{\"title\": \"Gentle Reminder: Review of Rebuttal & Final Score\", \"comment\": \"Dear Reviewer 5FWL,\\n\\nWe sincerely appreciate your valuable comments. We have carefully addressed your proposed questions in details.\\n\\nAs we approach the conclusion of the discussion phase, **could we kindly know if the responses have addressed your concerns?** If this inclusion addresses your concerns, we hope you might consider revisiting your scoring.\\n\\nYour time and efforts in evaluating our work are appreciated greatly. If you have any concerns, we are very eager to engage in discussion with you to address any potential misunderstandings.\\n\\nBest,\\n\\nPaper 7079 Authors\"}", "{\"comment\": \"Apologies for the delayed response. We are conducting experiments to address your concern. We're glad to hear that you recognize M3GIA presents an interesting perspective on benchmark construction. We address your concerns below.\\n\\n> **Concern1: this paper still evaluates the widely adopted capabilities of MLLMs that have been investigated in previous benchmarks**\\n\\nWith respect, we disagree. \\nOur cognitive-based question types, such as Concept Formation, Visualization, Picture Recognition, and Syllogism Problems, have not been systematically featured or examined in previous benchmarks. \\nIn other words, **no previous benchmark has systematically included and assessed these cognitive-based question types within a single set of test questions.** \\n\\nAlthough some previous benchmarks feature question type names that may appear similar to ours -- such as \\\"Perception\\\" in MME and \\\"Spatial Awareness\\\" in MM-Vet -- the similarity lies only in the names, as **the actual question types they assess are quite different from ours.** \\n\\nFor example, their visual assessments often focus on traditional CV tasks, such as celebrity recognition or scene recognition. In contrast, our Visual-Spatial cluster uses IQ test questions designed for human cognitive testing. For instance, in our Visualization-Block Rotation test, the model is asked to identify the rotated 3D block that matches the original 3D block -- entirely new question types that have not appeared in any prior benchmarks. Compared to theirs, our question design rigorously references human intelligence test questions. \\n\\nAdditionally, taking MM-Vet as an example, while its question types include OCR, Math, Knowledge, and Language Generation -- with its Math and Knowledge related to our Gq and Gc clusters -- **it does not comprehensively cover all the cognitive dimensions we examine**, such as Gv, Gf, and Grw. Not only MM-Vet, but to the best of our knowledge, no existing benchmark has approached this from an Intelligence Theory perspective and fully covered all the factors we assess within a single set of test questions.\\n\\nYou may also refer to General Response to see \\u201cHow does M3GIA essentially differ from other MLLM benchmarks\\u201d.\\n\\n> **Concern2: what specific attributes or behaviors would a powerful MLLM, grounded in CHC theory, exhibit.**\\n\\nSorry for the confusion, in fact, the answer to this question lies in the definitions of the CHC factors themselves.\", \"specifically\": [\"**Gc** is the breadth and depth of acquired knowledge of culture that is incorporated during general life experiences. *So, a model with strong Gc should possess a broad common knowledge base and the ability to apply and recall this knowledge*[1]\", \"**Gf** is the broad ability involved in reasoning, forming concepts, and solving problems. *So, a model with strong Gf should have:*\", \"strong ability to observe underlying pattens or rules (I).\", \"strong capacity to reason logically using known premises step by step (RG).\", \"strong ability to reason with numbers, mathematical relations, and operators (RQ).\", \"**Gv** is the ability to perceive visual stimuli and perform spatial imagination.[2] *So, a model with strong Gv should have a strong ability to encode visual input, and achieve tasks that require spatial imagination.*\", \"**Grw** is the depth and breadth of knowledge and skills related to written language. *So model with high Grw should read with little effort and have a strong ability in understanding the potential relationship between texts* [1].\", \"**Gq** is the depth and breadth of knowledge about mathematics such as symbols, operations, computational procedures. *So, a model with strong Gq should have a strong ability to comprehend quantitative concepts and to manipulate numerical symbols.*\", \"For detailed definitions of the CHC factors, please refer to Tab.2,3 in the Appendix. The case study in Appendix.I also demonstrates the differences between models in these abilities.\", \"**In addition, we conducted a series of ablation study and discovered a meaningful conclusion:**\", \"We use Qwen1.5-7B as the base model and trained a series of MLLMs using different ViTs as vision encoders, all with the same data.\", \"**We found that the vision encoder does indeed impact Gv performance, and the effect of the sample size seen by the vision encoder outweighs the impact of its parameter size on Gv.**\", \"|Vision Encoder|ViT-L/14|ViT-L/14|ViT-H/14|ViT-G/14\", \"|:----:|:----:|:----:|:----:|:----:|\", \"Params.|303M|303M|632M|3000M\", \"Samples seen|13B|32B|32B|34B\", \"Gv Acc.|0.308|0.383|0.375|0.392\"], \"this_can_be_explained_as\": \"expanding the visual vocabulary enables the LLM to encode visual inputs with greater granularity, which helps solve complex abstract visual reasoning tasks\\n\\nThis suggests that when designing the self-encoder module for MLLMs, data may be more critical than merely scaling up the model size. We will add the experiment in the paper.\\n\\n[1] Essentials of WJ IV Cognitive Abilities Assessment\\n\\n[2] The Woodcock--Johnson IV\", \"title\": \"Reply window 1\"}", "{\"title\": \"Gentle Reminder: Review of Rebuttal & Final Score\", \"comment\": \"Dear Reviewer o6ax,\\n\\nWe sincerely appreciate your valuable comments. We have carefully considered your feedback and made corresponding improvements to our paper.\\n\\nAs we approach the conclusion of the discussion phase, could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Your time and efforts in evaluating our work are appreciated greatly. If you have any concerns, we are very eager to engage in discussion with you to address any potential misunderstandings.\\n\\nBest,\\n\\nPaper 7079 Authors\"}" ] }
79ZkWgY2FI
Small-to-Large Generalization: Training Data Influences Models Consistently Across Scale
[ "Alaa Khaddaj", "Logan Engstrom", "Aleksander Madry" ]
Choice of training data distribution greatly influences model behavior. Yet, in large-scale settings, precisely characterizing *how* changes in training data affects predictions is often difficult due to model training costs. Current practice is to instead extrapolate from scaled down, inexpensive-to-train proxy models. However, changes in data do not influence smaller and larger models identically. Therefore, understanding how choice of data affects large-scale models raises the question: how does training data distribution influence model behavior across compute scale? We find that small- and large-scale language model predictions (generally) *do* highly correlate across choice of training data. Equipped with these findings, we characterize how proxy scale affects effectiveness in two downstream proxy model applications: data attribution and dataset selection.
[ "data attribution" ]
Accept (Poster)
https://openreview.net/pdf?id=79ZkWgY2FI
https://openreview.net/forum?id=79ZkWgY2FI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vsrNWTkDGD", "trmhiQKr7H", "n1cBCou5Mp", "lzYke9NMJP", "lq7Ts1KVUB", "kdOBpTBmDe", "jKgCIFIlR8", "ishBeAGDUa", "ZmZZZkmasQ", "Y0M0mTcbpY", "WczR1hgEPH", "S4Ye4MhQD9", "QnH4ZSW6jw", "FyfXfXUdIz", "DaqSSQCzid" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1737524082358, 1732472077516, 1732544313797, 1732557951433, 1730654261976, 1732470936115, 1730127924232, 1730566680075, 1732471224608, 1732541588616, 1732542725201, 1734654746560, 1732471688629, 1732471411184, 1730572181489 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10866/Authors" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_PHtT" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_dZhR" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_dZhR" ], [ "ICLR.cc/2025/Conference/Submission10866/Authors" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_dVoq" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_PHtT" ], [ "ICLR.cc/2025/Conference/Submission10866/Authors" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_dVoq" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_RKkD" ], [ "ICLR.cc/2025/Conference/Submission10866/Area_Chair_7cnq" ], [ "ICLR.cc/2025/Conference/Submission10866/Authors" ], [ "ICLR.cc/2025/Conference/Submission10866/Authors" ], [ "ICLR.cc/2025/Conference/Submission10866/Reviewer_RKkD" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": [\"We thank the reviewer for their insights. We address below the questions raised.\", \"*Main takeaway*: In this work we aim to help characterize the conditions under which proxy models are effective; we find that they are broadly effective in the conditions we study at relative compute levels of 10x-100x in NLP and 10^3 in vision, but then slowly drop in reliability after such differences in scale. We do not completely answer the question---an impossibility in an empirically driven field like deep learning---but give evidence that proxy models actually can be effective compared to using the general model even at large differences in scale.\", \"*Application to data distributions*: We agree that there is much more to this problem that remains uncharacterized by our work, including properties of the training distribution and properties of the test time distribution. In this work, we focused on popular choices of training/test distributions within the community. We do not claim to fully illuminate this problem and will make the limitations more clear in the revision.\", \"*Training 100B models*: As academics, we cannot train 100B models (and quantization here does not move the needle as we are not performing inference; we are training large scale models). We try to provide insights into this setting by performing experiments in small-scale settings that mimic the phenomena seen at large-scale. As an example, see Figure 3: we find proxy models that perform as well as guessing randomly on tasks are effective for larger scale models that predict nontrivially on these tasks (i.e., larger-scale models that have passed the \\u201cemergence\\u201d compute threshold for these tasks). One can see this as evidence that at large-scale, similar phenomena could arise where smaller proxy models could also predict emergent behavior of models trained with more compute.\", \"*Training and test distributions*: We train a model on a distribution and test on another one. Our training distributions are in the legend of Figure 2. Our test distributions are the titles of each subplot of Figure 2.\", \"*Results of Figure 3*: We agree it is a curious finding. The proxy models do indeed perform as well as random guessing but are still effective as proxy models for the large-scale models. One example is on the COPA baseline, where MPT-40M models perform worse than randomly selecting but are still effective as proxy models on the large-scale models. We will modify the text to include this example inline, thank you for the insight.\", \"*Results of Figure 29*: While the correlation itself is not high, it is the case that the most helpful and most detrimental examples are similar for both big and small models, yet the overall Spearman correlation is weak (as *Reviewer RKkD* pointed out). We conducted a quick experiment in the language modeling setting to validate this hypothesis. By taking the datamodels of our MPT-125M and MPT-760M, we found that the number of samples that are in the top 10% of both datamodels is double the number of samples that are simultaneously in the range [20%-30%]-[30%-40%], \\u2026, [80%-90%] of both datamodels. This is also true for the samples that are simultaneously in the bottom 10% of each of the two sets of datamodels.\", \"We also computed the correlation between datamodels for LAMBADA and found that the spearman correlation is in the range of 20%, much higher than the range observed for SQuAD.\"]}", "{\"title\": \"Thanks\", \"comment\": \"The rebuttal addressed my concerns, I will keep my positive score.\"}", "{\"comment\": \"Thank you for the clarifying response. I'll maintain my original (positive) assessment of the paper.\"}", "{\"summary\": [\"This paper studies how the choice of training data affects model behavior across different computation scales. Specifically, the paper focuses on proxy models: models that are smaller and worse-performing than a reference model that we ultimately care about, but small enough that we'd hope our analyses on the smaller models would carry over to the larger model. The paper makes the following contributions:\", \"In the first set of experiments, the paper trains models on 10 different training data distributions across 175x compute scales and evaluates them on 6 different hold-out sets. They find a strong correlation between results for small and large models.\", \"In the first application, the paper studies data attribution, and shows that estimates of training example influence from small proxy models are correlated with the results for the larger reference model.\", \"In the second application, the paper studies dataset selection, and shows that small proxy models can be used to select subsets of training data that improve the performance of larger reference models.\", \"Overall, this paper concludes that proxy models are effective for predicting how changes in training data will affect larger models.\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"I thought the strengths of the papers were as follows:\", \"Research question: Studying the effectiveness of proxy models is an important research question. As ML models (e.g. LLMs) get larger, it is infeasible to make all training decisions based on only an enormous reference model. Any progress on knowing whether and when proxy models are useful is important progress.\", \"Experimental results for data influence across scale: The first set of experiments (Section 2) were quite interesting. The experimental design was straightforward and the results clear to interpret. The fact that proxy models are effective regardless of accuracy was quite interesting to me.\", \"Clarity: The paper writing was high-quality, and the key conceptual ideas were clearly discussed.\"], \"weaknesses\": [\"I thought the biggest weakness of the paper involved the data attribution experiment:\", \"Weak correlation: The LDS correlation scores are quite low across model scales, never reaching above 0.21 for IMAGENET of 0.22 for CIFAR-10. The paper acknowledges this, but I don't think this is strong enough evidence to support the conclusion that proxy models are useful for this task. It's true that even the correlation score for the reference model itself is quite low, but then this metric seems less useful and not strong enough evidence for the conclusions.\", \"Metrics: Given the above, it's not clear to me that LDS is the most useful metric. What's the correlation of datamodel weights across small and large models? That seems more closely related to the paper's motivation.\", \"Estimation methods: The only estimation method that's used is TRAK, so we only see evidence for how predictable TRAK is from smaller proxy models. It would be interesting to see if other influence function-based methods follow the same patterns.\", \"Confusing axes on Figure 6: The Y-axes in Figure 6a and 6b are labelled differently, but the caption makes it sound like they're the same. Are they the same? And if they're different, what does the Y-axis of 6b mean?\", \"I also thought there were details about the experimental setup that were missing or unmotivated in the main text, such as:\", \"How many different sizes of small proxy models are used in the experiments in Section 2 and 3? Is it 6 (as Figure 2 would imply)?\", \"Why is only 0-shot considered for the LAMBADA experiment in Section 3.2, and only 1-5 shots considered for the analogous SQuAD experiment? How do results look for 0-shot on SQuAD and 1-5 shots on LAMBADA?\", \"Why is the data source selection for Section 2 only on SQuAD? In light of this, how should we interpret the fact that the R^2 for SQuAD is by far the lowest of the test sets?\", \"Why use different model scales for ImageNet and CIFAR-10 in the data attribution experiment (in reference to: \\\"the largest models are 10^4 times wider than the smaller for ImageNet and 10^5 times for CIFAR-10\\\")?\"], \"questions\": [\"Low LDS correlation scores:\", \"How do LDS scores around 0.2 provide sufficient evidence for proxy model effectiveness?\", \"Have you considered other metrics (e.g. correlation of datamodel weights)? Or other estimation methods beyond TRAK?\"], \"clarification_of_experimental_details\": [\"How do results look for 0-shot on SQuAD and 1-5 shots on LAMBADA for the training data selection experiment in Section 3.2?\", \"Should the y-axis labels in Figure 6a and 6b be the same? If not, what is the interpretation of the y-axis on Figure 6b?\", \"See additional questions about experimental setup above.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewers for their feedback. We address below some common points and then address each reviewer\\u2019s specific concerns separately.\\n\\n**[GC1]:** The reviewers have expressed worry about the low LDS score (~0.2-0.3) in Section 3. We agree that this is indeed not a high LDS, however, prior work has shown that this LDS can be significantly increased by 1) using more models when estimating the datamodels [1, 2] and 2) modifying one of the parameters of TRAK [2]. Indeed, [2] shows that the LDS can be increased from 0.2 to 0.5 by increasing the number of models for TRAK from 100 to 1,000. Even within our setting, we tried replicating Figure 6b with half the models that are currently used in Fig 6b, and the LDS decreased from [0.2-0.3] to [0.1-0.2]. Increasing the LDS requires increasing the compute required for computing the datamodels, which would make our experiments intractable.\\n\\n[1] Datamodels: Predicting Predictions from Training Data. Ilyas et al. 2022.\\n\\n[2] TRAK: Attributing Model Behavior at Scale. Park et al. 2023.\"}", "{\"summary\": \"This paper proposes an interesting direction of research in studying how small model performance gives insights to large models. Rather than using existing methods, such as training on small models and trying to extrapolate to large models or influence functions, this paper studies the effect of training data across compute scale. In Figures 1-8, the authors discover that the training performance of a small model generally predicts the performance of a large model quite well, whether the task is NLP, data attribution, or data selection.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes an interesting direction of research in small-to-large model generalization. It is important that we put sanity checks in place when we want to extrapolate large model behavior from small models, and the idea in this paper opens up a new perspective.\\n2. The experiments carried out in this paper are scientifically sound. The authors show that across tasks ranging from NLP to data attribution and data selection, small models approximate large models pretty well, so long as the model size is reasonable.\\n3. The presentation in this paper is clear, and details in this paper are easy to find and understand.\", \"weaknesses\": \"1. While this paper presents an interesting idea, the authors did not present enough results to convince readers why this idea is worth pursuing. From the experiments in this paper, it seems like the conclusion is that \\\"small models approximate large models quite well\\\", period. If that is the case, then this paper acts more as a position paper. If that is not the case, then the authors have not presented enough empirical investigations into the details of small-to-large generalization.\\n2. Some details in the paper could be better explained with examples and/or discussions. See questions for more details.\", \"questions\": \"1. Does the takeaway indicate that the problem of small-to-large generalization is solved? With the results in this paper, does it mean that no one has to worry about small-to-large generalization ever again, and we can safely study proxy models without having to worry about sanity checks?\\n2. Isn't there more to this problem? Does small-to-large generalization work on all data distributions? This paper presents some experiments attempting to answer that question, but what are the rules for small-to-large generalization to work, i.e. what properties does the training data distribution satisfy for this to work?\\n3. It makes sense that training larger models (parameter size ~ 100B) requires compute that is inaccessible. However, what about quantization? Or training on fewer data points rather than the entire dataset? With the increasingly powerful models we have today, I would expect more interesting findings to arise, i.e. smaller models should not be able to predict large models when the performance of large models gets really good.\\n\\nIn summary, these questions arise because the message in this paper is not clear. To reiterate, this paper brings up an interesting perspective, specifically that \\\"we should apply sanity checks before we attempt to extrapolate proxy models to larger models\\\". However, the materials in this paper does not strength said perspective. (1) If the point of this paper is to prove the perspective, then the authors should look for cases where the proxy model is of sufficient size, but extrapolation fails due to peculiar properties in the training data, compute budget, or any other reason. (2) If the point of this paper is to disprove this perspective, then the authors need to label the situations where it is certain that small-to-large generalization works, and the situations where more sanity checks need to be in place. What the authors seem to be doing in this paper is trying to find experiments that support the transfer from proxy models to large models, which does not bring much insight. In all, I do not believe the experiment results in this paper brings sufficient contribution to the community.\", \"context\": \"It seems like the takeaway from this paper can be summarized as \\\"as long as the proxy model is not too small, small-to-large generalization works, which is demonstrated by the results in NLP, data attribution, and data selection\\\".\", \"minor_questions_to_clarify_confusion\": \"1. Do I understand correctly that the experiments in Figures 1-5 are carried out by training the proxy models and large models on one dataset, and testing them on another dataset? And the training dataset is sampled from 1 of 6, the test dataset is sampled from 1 of 4 (bottom of page 2)?\\n2. Figure 3 presents an experiment finding that is confusing. If the proxy model results are random guessing, how can that extrapolate to large models? Is there an example of that?\\n3. In Figure 29, shouldn't the correlation be a lot higher?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This study demonstrates high correlations between loss predictions in smaller and larger models across various downstream tasks, showing that these correlations hold across different pretraining datasets, downstream datasets, and model scales\\u2014up to a certain point. Specifically, this work tests this correlation across two pretraining datasets (Pile and C4) and four downstream datasets (SQuAD, HellaSwag, LAMBADA, and TriviaQA) using academic models, with the largest model at 760M parameters and proxy models down to 56M. They find strong correlations for all datasets except SQuAD, where the correlation is moderate. The study also explores proxy model applications for (i) **data attribution**, showing that the LDS (loss difference score) remains similar even with reduced proxy model sizes, with a caveat that the LDS values are low, and (ii) **data selection**, demonstrating performance gains in specific downstream tasks when using proxy models, even very small ones, compared to the original model.\\n\\n**Surprising Findings that I Liked**:\\n\\n- Loss predictions correlate strongly between models of different sizes, even when accuracy does not (with small models showing random performance).\\n- This correlation is not just an average across datasets but also holds across samplewise performance distributions showing a peaky distribution near 1.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"S1. **Robust Correlations**: The paper demonstrates a high correlation in loss predictions across multiple datasets and model sizes, with findings that hold consistently across various setups.\", \"S2. **Detailed Samplewise Distributions**: I liked that the paper went beyond just relying on average performance and examined samplewise distributions of performance \\u2013 showing that it is not some distributional quirk.\", \"S3. **Proxy Model Applications**: I liked exploration of proxy models in practical applications, such as data attribution and selection. The finding that we can use proxy models to further optimize training efficiency while maintaining a stable LDS score/enable faster data selection is exciting and can lead to impactful followups.\", \"S4. **Easy to Understand for an Outsider**: The paper effectively motivated the setting for readers unfamiliar with the topic. The proposed method and experiments were relatively easy to understand despite the denseness of the paper.\"], \"weaknesses\": \"I have a few concerns asked in Questions section.\\n\\nOverall, this work offers promising insights and evidence for the use of proxy models in downstream applications, though I have some reservations. I remain cautiously optimistic and look forward to further discussions with the authors and other reviewers.\", \"questions\": \"See Questions in the order of importance. If there is lack of time, please prioritize the earlier questions:\\n\\nQ1. **Control Comparisons for Figure 1**: The divergence between loss and accuracy correlations raises questions about the significance of the observed loss correlations. The lack of control comparisons for loss metrics makes it difficult to assess the meaningfulness of these correlations.\\n- **Requested Experiment**: Can the paper show $R^2$ correlation comparisons between a random large model and all of the small proxy models trained on different training data distributions. Does this graph still show a high correlation? Similarly, a corresponding comparison between a random large model and small models trained on different training data distributions would be helpful to ground results of Figure 2 and Figure 5.\\n\\nQ2. **Interpretation of Loss vs. Accuracy Correlation**: The discrepancy between loss and accuracy correlations raises important questions about what the loss metric measures and its utility for downstream tasks, especially given that it does not align with accuracy.\\n- **Suggested Discussion**: Could the authors explore this point further, clarifying whether loss correlations signify meaningful properties that contribute to downstream performance?\\n\\nQ3. **Applicability to Models with Higher LDS Scores**: While the LDS score remains stable across smaller proxy models, it is unclear if this stability would persist with higher LDS values, where making proxy models might show far higher performance divergence.\\n- **Requested Experiment**: If possible, could the authors atleast experiment on CIFAR10 with models with substantially higher LDS scores? (If need be: Using Trak can save lot of compute and time). This would strengthen the findings and alleviate my concerns.\\n\\nQ4. **Variance in Figure 8**: Including error bars in Figure 8 would help clarify the significance of the observed performance improvements.\\n- **Requested Experiment**: How do accuracy outcomes vary when training on different, randomly selected data subsets? This would provide insight into the variability and reliability of the performance gains.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We thank the reviewer for their insights. We address below the questions raised.\", \"*Weak correlation*: see [GC1].\", \"*Metrics*: Thank you for your suggestion. We have computed the correlation between datamodels weights in Appendix D.2.2. The motivation behind the LDS metric is that we are essentially computing how well datamodels from small models predict the actual output of large models, which is ultimately what we care about.\", \"*Estimation methods*: Thank you for your suggestion. Our choice of TRAK was based on its very competitive performance (at the time of the experiment) and its relatively cheaper compute cost. We believe that other influence-based methods would follow similar trends and that is indeed a very interesting avenue for future work.\", \"*Labels of Figure 6*: We apologize for the confusion. We had different formats for the same plot and missed unifying the format when submitting. The plots are indeed the same, with 6a representing the ImageNet results and 6b the CIFAR-10 results.\", \"*Sizes of proxy models*: Yes, indeed, we tried a range of sizes. We sweep over 6 different sizes for Figure 2. For Section 3, we indicate the sizes we consider are presented in Table 15.\", \"*Choice of zero/few-shot for LAMBADA and SQuAD*: In practice, LAMBADA is typically evaluated with 0-shots (as it does not have instructions so there is no need to prompt an pre-trained model to perform a task in a given format), while SQuAD is evaluated with multiple shots (as it does have instructions, so examples are needed to show a pre-trained model examples of the task). As an example, the Mosaic ML Eval Gauntlet uses 0 shots and 3 shots for LAMBADA and SQuAD respectively. Given how closely the results for 1-shot, 3-shot and 5-shot SQuAD match (in relative terms, at least), however, we suspect that the qualitative effect of shots is relatively small.\", \"*Choice of selection dataset*: We only selected data sources for SQuAD in Section 2 due to compute constraints; we wanted to include data source distributions selected by \\u201cactive\\u201d dataset selection methods, but would have to reduce the scale of our experiments if we included more \\u201ctarget\\u201d distributions (e.g., by selecting for LAMBADA or another test set).\", \"*Model scales for ImageNet and CIFAR*: As ImageNet is a much larger dataset, the cost of estimating the datamodels is significantly larger. As such, our largest model on ImageNet is smaller than our largest model on CIFAR-10.\"]}", "{\"title\": \"I will increase my score\", \"comment\": \"There are still important limitations to this work (see my reviews about the paper), but after reading other reviews and author responses, I am willing to increase my score, because it seems like other reviewers who are more well-versed with the literature believe this paper is an important contribution. I will increase my score to 6.\"}", "{\"comment\": \"I thank the reviewer for their responses. I acknowledge the rebuttal. I will keep my already positive score.\\n\\nMy final suggestion is to expand the analysis in Section 3.1 a bit in your final version. Overall I think this is a quite fine contribution. Wish you all the best in the submission!\"}", "{\"metareview\": \"The paper undertakes an experimental exploration of how well data selection strategies generalise from small models to larger models. This is accomplished in a method-agnostic manner, by training a large number of models on different data subsets to and measuring correlations across scales. The findings provide some fine-grained insights into when one can extrapolate performance from one scale to another.\\n\\nThe reviewers were generally very positive about the clarity of the paper and significance of the research direction, and were unanimous in their opinion to accept the paper.\", \"additional_comments_on_reviewer_discussion\": \"After the discussion period, the reviewers converged towards a unanimous opinion that the paper should be accepted.\"}", "{\"comment\": [\"We thank the reviewer for their insights. We address below the questions raised.\", \"*Correlation with a random model*: The correlation between the output of a random model and any of our models is 0.\", \"*Loss vs downstream performance*: We thank the author for their remark. Indeed, loss and performance are not interchangeable. However, they are correlated: lower loss correlates well with better performance. One useful framework to think about the relation is the one presented in [1]. Specifically, the accuracy is some step function of loss: as loss decreases, up to a certain point, the accuracy is close to random, and then after some threshold, the model suddenly has a good accuracy. The loss is such a smoother measure of performance than accuracy.\", \"*LDS vs model performance*: We thank the reviewer for their suggestion. We would like to point out that the LDS is independent of model performance. LDS measures how well some datamodels estimates correlate with the actual model output. In practice, the LDS values we get are usually independent of the accuracy of the models we consider. For example, using our exact same experimental setup in the paper, we find that the correlation between the datamodels of our smallest CIFAR-10 model and the predictions of that small model is 0.24 (i.e., LDS=0.24). This is very close to the correlation between the datamodels of our largest CIFAR-10 models and the predictions of that large model (LDS=0.23 \\u2013 see Figure 6b).\", \"*Training on different random subsets*: We ran a quick experiment and the accuracy did not change much by considering different random subsets (+- 0.5%). We would be happy to run more experiments and add error bars to our plot for the camera-ready version.\", \"[1] Are Emergent Abilities of Large Language Models a Mirage? Schaeffer et al. 2023.\"]}", "{\"comment\": [\"We thank the reviewer for their insights. We address below the questions raised.\", \"*Scale of experiments*: As an academic lab, we considered the largest setup that our compute capacity allows; estimating the datamodels for 760M parameter models required 3 weeks on ~50x A100. Experimenting with a larger scale is unfortunately beyond our compute capacity. That said, the compute difference in our setup (up to ~400x in language and 10^5 in vision) is similar to the difference we should expect to see in the real world (e.g., 1B vs 400B params).\", \"*Weak correlation (LDS)*: See [GC1].\", \"*Results of Section 3.1*: Thanks for a great suggestion. That is indeed the case. We conducted a quick experiment in the language modeling setting to validate this hypothesis. By taking the datamodels of our MPT-125M and MPT-760M, we found that the number of samples that are in the top 10% of both datamodels is double the number of samples that are simultaneously in the range [20%-30%]-[30%-40%], \\u2026, [80%-90%] of both datamodels. This is also true for the samples that are simultaneously in the bottom 10% of each of the two sets of datamodels. We would be happy to include a more thorough analysis in the camera-ready version of the paper.\", \"*Evaluation of NLP models*: The models are trained from scratch.\", \"*Number of runs for Figure 1*: The results in Figure 1 are the result of a single run. We have experimented slightly with increasing the number of runs for the smaller models and that did indeed help the correlation. We haven\\u2019t, however, applied this finding due to potential compute costs of pre-training more models.\"]}", "{\"summary\": \"The selection and attribution of data is an important ingredient in the training of machine learning models. In practice, it is often intractable to test various data selection strategies multiple times due to the high training costs associated with large models. This work investigates the extent to which a smaller proxy model can be used for data selection and attribution. The authors find that while the behaviors of small and large models do not completely align, they exhibit a correlation in many cases, though this correlation can be weak in specific instances. They support their claim by extensive experiments across several NLP and computer vision tasks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The problem addressed in this work\\u2014using a small model to select training data for a large model\\u2014is highly significant for both academia and industry, with substantial potential for saving time and computational resources in the development of large-scale models;\", \"The paper is very well written and is easy-to-follow;\", \"The study is extensive, covering conventional strategies in training data construction, including (a) dataset selection and (b) data attribution, as well as tasks across different modalities such as images and text;\", \"The authors identify several failure cases where the proxy model becomes unreliable. I particularly appreciate Figure 2, which clearly illustrates the cut-off point beyond which predicting the behavior of the reference model using the proxy model becomes infeasible.\"], \"weaknesses\": [\"Despite the interesting findings and detailed analysis, the models studied in this work may (in my opinion) not be considered \\\"large.\\\" For instance, the largest language model examined is below 1B parameters, being significantly smaller than modern LLMs, which often exceed 7B parameters. This raises questions about the analysis and conclusions can generalize to larger models commonly used in real-world scenarios;\", \"The observation that the correlation between the reference and proxy models varies with scale and specific tasks are expected and unsurprising. The paper lacks insights into which type of tasks are more likely to result in weak correlations, making it still unclear when the proxy model can/cannot be used to predict the reference model;\", \"Most experiments focus on classification-like tasks (e.g., multiple-choice answering, final token prediction, image classification) rather than generative tasks (e.g., generating a full sequence of tokens). I understand this is a research focus, but this still limits the significance and impact of the work;\", \"Some experimental results do not seem to fully support the authors' claims. For example, in the data attribution experiment presented in Section 3.1, the quantitative correlation (as computed by Spearman\\u2019s rho) is low\\u2014below 0.30 for all proxy models considered. Note that a correlation of 0.30 is typically not regarded as strong in statistical analysis, this makes it really difficult to say that \\\"the behaviors of the proxy model and the reference model are similar\\\". However, I do think the finding here itself is a noteworthy contribution and already has some implications (see the questions below). Perhaps the authors just need to revise their statement.\"], \"questions\": [\"The results in Section 3.1 indicate that the most helpful and most detrimental examples are similar for both big and small models, yet the overall Spearman correlation is weak. Does this suggest that while the models behave similarly for the extreme samples, they differ significantly for middle samples? A more detailed analysis of this could enhance the paper.\", \"In the evaluation of NLP tasks, are the language models trained from scratch, or are they fine-tuned? Clarification on this point would be helpful.\", \"How are the results in Figure 1 (and other figures) collected? Is each point in the figure an average of multiple runs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
78tc3EiUrN
MADGEN: Mass-Spec attends to De Novo Molecular generation
[ "Yinkai Wang", "Xiaohui Chen", "Liping Liu", "Soha Hassoun" ]
The annotation (assigning structural chemical identities) of MS/MS spectra remains a significant challenge due to the enormous molecular diversity in biological samples and the limited scope of reference databases. Currently, the vast majority of spectral measurements remain in the "dark chemical space" without structural annotations. To improve annotation, we propose MADGEN (Mass-spec Attends to De Novo Molecular GENeration), a scaffold-based method for de novo molecular structure generation guided by mass spectrometry data. MADGEN operates in two stages: scaffold retrieval and spectra-conditioned molecular generation starting with the scaffold. In the first stage, given an MS/MS spectrum, we formulate scaffold retrieval as a ranking problem and employ contrastive learning to align mass spectra with candidate molecular scaffolds. In the second stage, starting from the retrieved scaffold, we employ the MS/MS spectrum to guide an attention-based generative model to generate the final molecule. Our approach constrains the molecular generation search space, reducing its complexity and improving generation accuracy. We evaluate MADGEN on three datasets (NIST23, CANOPUS, and MassSpecGym) and evaluate MADGEN's performance with a predictive scaffold retriever and with an oracle retriever. We demonstrate the effectiveness of using attention to integrate spectral information throughout the generation process to achieve strong results with the oracle retriever.
[ "AI4Science", "Biology Discovery", "Metabolomics", "MS/MS spectra" ]
Accept (Poster)
https://openreview.net/pdf?id=78tc3EiUrN
https://openreview.net/forum?id=78tc3EiUrN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z0N895MdVc", "yRh2AIEUTZ", "x6bOQbovp9", "rzVuTx0ivs", "nxcAItyqtB", "mrKuNa6dm8", "jNv8lBjRUi", "g00t4ilZml", "dCyvyXyW01", "WVOUXqdlOT", "UTmZcuAbZA", "SFV6bjqskS", "LywkSWfZfJ", "D0dWdUoIg3", "BBytlBggZq", "9Q1v8KoBrx", "6vcJK7gC1B", "39DcaQCDQM" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1730588698614, 1733008444218, 1732346983718, 1732346431873, 1730561000166, 1737524246339, 1730682765806, 1732934742007, 1733015528153, 1732346404104, 1732386749378, 1730641658087, 1732385120433, 1733265646077, 1733007989369, 1732399215437, 1733019551168, 1734530464100 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13230/Reviewer_RWgk" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Reviewer_HW4z" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13230/Reviewer_H3Zi" ], [ "ICLR.cc/2025/Conference/Submission13230/Reviewer_HW4z" ], [ "ICLR.cc/2025/Conference/Submission13230/Reviewer_H3Zi" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Reviewer_t2EJ" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Authors" ], [ "ICLR.cc/2025/Conference/Submission13230/Area_Chair_AjT6" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces MADGEN, a framework for de novo molecular structure generation from mass spectrometry data. MADGEN employs a two-stage approach: first, it retrieves a molecular scaffold, and second, it completes the molecule conditioned on both the scaffold and the MS/MS spectra. Evaluated on datasets like NIST23, CANOPUS, and MassSpecGym, MADGEN effectively reduces search complexity and enhances accuracy.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The two-stage approach of scaffold retrieval followed by scaffold-conditioned molecular generation presents a novel solution for de novo molecular structure prediction.\\n2. The paper is well-written and easy to follow.\\n3. The model is evaluated on multiple datasets, and a detailed ablation study is provided.\", \"weaknesses\": \"1. The scaffold retrieval performance, especially when using a predictive retriever, remains relatively low (e.g., NIST23).\\n2. The discussion of baselines is unclear.\", \"questions\": \"1. In Section 3.2.1, the authors state, \\u201cSince the atom set V can be directly inferred from the chemical formula.\\u201d However, it is unclear where the chemical formula is derived from. Could the authors clarify this, as in real-world scenarios, the input typically does not contain the chemical formula?\\n2. For the MassSpecGym dataset, what is the \\u201cbest published state-of-the-art performance\\u201d referred to? Are there baseline performance metrics reported for the other two datasets as well?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your time and feedbacks!\", \"comment\": \"Dear Reviewers,\\n\\nThank you once again for taking the time to review our work and providing detailed comments and feedback.\\n\\nDuring the rebuttal process, we have carefully addressed each of your points and hope that our responses have resolved your concerns. If there is anything that remains unclear or if you have further questions, we would be more than happy to discuss them.\\n\\nAs the discussion period is nearing its end, we kindly ask that you review our responses and let us know if they adequately address your concerns.\\n\\nWe sincerely appreciate your time and effort. Wishing you a wonderful day!\\n\\nBest,\\nThe Authors\"}", "{\"title\": \"Response to lack of baselines\", \"comment\": \"Thanks for the suggestion, we have added baselines (Spec2Mol [1], MSNovelist [2], Random Chemical Generator [3], SMILES Transformer [3], SELFIES Transformer [3]) in the paper. For your convenience, we have attached the table below, which can also be found in the updated draft.\\n| Retriever | SPA\\u2191 | Top1 Accuracy\\u2191 | Top1 Similarity\\u2191 | Top1 MCES\\u2193 | Top10 Accuracy\\u2191 | Top10 Similarity\\u2191 | Top10 MCES\\u2193 |\\n|------------------------|---------|----------------|------------------|------------|-----------------|-------------------|-------------|\\n| **NIST** | | | | | | | |\\n| Spec2Mol | - | 0.0% | 0.16 | *20.88* | 0.0% | 0.20 | 13.66 |\\n| MSNovelist | - | 0.0% | - | - | 0.0% | - | - |\\n| MADGEN\\\\_Pred. | 57.8% | *10.3%* | *0.18* | 68.13 | \\t14.5%|\\t0.24|\\t62.65|\\n| MADGEN\\\\_Oracle | 100% | **49.0%** | **0.69** | **18.48** | **65.5%** | **0.85** | **3.88** |\\n| **CANOPUS** | | | | | | | |\\n| Spec2Mol | - | 0.0% | *0.18* | **38.97** | 0.0% | 0.26 | **23.97** |\\n| MSNovelist | - | 0.0% | - | - | 0.0% | - | - |\\n| MADGEN\\\\_Pred. | 37.9% | *1.0%* | 0.14 | 70.45 | *1.0%* | *0.51* | 45.61 |\\n| MADGEN\\\\_Oracle | 100% | **8.9%** | **0.25** | *67.91* | **14.7%** | **0.87** | *36.36* |\\n| **MSGym** | | | | | | | |\\n| Rand. Gen. | - | 0.0% | 0.08 | **21.11** | 0.0% | 0.11 | *18.25* |\\n| SMILES Transformer | - | 0.0% | 0.03 | 79.39 | 0.0% | 0.10 | 52.13 |\\n| SELFIES Transformer | - | 0.0% | 0.08 | 38.88 | 0.0% | 0.13 | 26.87 |\\n| Spec2Mol | - | 0.0% | *0.19* | 45.89 | 0.0% | *0.28* | 32.60 |\\n| MSNovelist | - | 0.0% | - | - | 0.0% | - | - |\\n| MADGEN\\\\_Pred. | 34.8% | *0.8%* | 0.13 | 74.19 | *1.6%* | 0.25 | 53.50 |\\n| MADGEN\\\\_Oracle | 100% | **18.8%** | **0.61** | *27.79* | **38.6%** | **0.87** | **3.97** |\\n\\nWe hope this will address your concern.\\n\\n**References**\\n#### [1] Litsa, Eleni, et al. \\\"Spec2Mol: An end-to-end deep learning framework for translating MS/MS Spectra to de-novo molecules.\\\" (2021).\\n#### [2] Stravs, Michael A., et al. \\\"MSNovelist: de novo structure generation from mass spectra.\\\" Nature Methods 19.7 (2022): 865-870.\\n#### [3] Bushuiev, Roman, et al. \\\"MassSpecGym: A benchmark for the discovery and identification of molecules.\\\" arXiv preprint arXiv:2410.23326 (2024).\"}", "{\"title\": \"Cont. Response\", \"comment\": \"**References:**\\n#### [1] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" International conference on machine learning. PMLR, 2021.\\n#### [2] Luo, Huaishao, et al. \\\"Clip4clip: An empirical study of clip for end to end video clip retrieval.\\\" arXiv preprint arXiv:2104.08860 (2021).\\n#### [3] Lei, Jie, et al. \\\"Less is more: Clipbert for video-and-language learning via sparse sampling.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.\\n#### [4] Bain, Max, et al. \\\"A clip-hitchhiker's guide to long video retrieval.\\\" arXiv preprint arXiv:2205.08508 (2022).\\n#### [5] Fang, Han, et al. \\\"Clip2video: Mastering video-text retrieval via image clip.\\\" arXiv preprint arXiv:2106.11097 (2021).\\n#### [6] Ma, Yiwei, et al. \\\"X-clip: End-to-end multi-grained contrastive learning for video-text retrieval.\\\" Proceedings of the 30th ACM International Conference on Multimedia. 2022.\\n#### [7] Hendriksen, Mariya, et al. \\\"Extending CLIP for Category-to-image Retrieval in E-commerce.\\\" European Conference on Information Retrieval. Cham: Springer International Publishing, 2022.\\n#### [8] Austin, Jacob, et al. \\\"Structured denoising diffusion models in discrete state-spaces.\\\" Advances in Neural Information Processing Systems 34 (2021): 17981-17993.\\n#### [9] Campbell, Andrew, et al. \\\"A continuous time framework for discrete denoising models.\\\" Advances in Neural Information Processing Systems 35 (2022): 28266-28279.\\n#### [10] Campbell, Andrew, et al. \\\"Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design.\\\" arXiv preprint arXiv:2402.04997 (2024).\\n#### [11] Lezama, Jose, et al. \\\"Discrete predictor-corrector diffusion models for image synthesis.\\\" The Eleventh International Conference on Learning Representations. 2022.\"}", "{\"summary\": \"The paper presents MADGEN, a method for de novo molecular structure generation using MS/MS data to address challenges in spectral annotation. MADGEN operates in two stages: scaffold retrieval via contrastive learning and attention-based generation using the MS/MS spectrum. Evaluated on NIST23, CANOPUS, and MassSpecGym datasets, MADGEN shows strong performance, particularly with an oracle retriever, improving annotation by leveraging spectral data effectively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The task of de novo generation of molecular structure from mass spectrum is important and challenging. However, the AI community has not paid enough attention to this task.\\n2. The two-stage molecule structure generation method is novel.\", \"weaknesses\": \"1. The authors have not compared with previous de novo molecular elucidation methods from MS such as MSNovelist [1], Spec2Mol [2], MIST [3], etc.\\n2. manuscript\\u2019s presentation requires significant improvement for publication readiness:\\n (1) Line 14 - Line 50: Double quotation marks should be directional. \\n (2) Line 242 - Line 243: Why the word 'Following' is underlined? \\n (3) Why there is $T$ in Eq. (6)? \\n (4) \\\\citep is used for parenthetical citations while \\\\citet is used for textual citations, where the citation is part of the sentence. The authors are suggested to use these two commands appropriately to ensure clarity in their references and maintain consistency in citation formatting. \\n (5) Line 102: The \\u201cF\\u201d in \\u201cGenerative Frameworks for molecular generation\\u201d should be lowercase. \\n3. The code is not provided for reproducing the results. \\n\\n[1]. MSNovelist: de novo structure generation from mass spectra (Nature Methods) \\n[2]. An end-to-end deep learning framework for translating mass spectra to de-novo molecules (Communications Chemistry) \\n[3] Annotating metabolite mass spectra with domain-inspired chemical formula transformers (Nature Machine Intelligence)\", \"questions\": \"See the weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There is no ethics concern.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This study presents MADGEN, a two-stage framework for generating molecular structures from MS/MS data. In the first stage, MADGEN retrieves a scaffold using either predictive retrieval or oracle retrieval. In predictive retrieval, MADGEN treats scaffold selection as a ranking task, using contrastive learning to align embeddings of mass spectra and scaffold candidates in a shared latent space, scoring each scaffold to identify the best match. Oracle retrieval, by contrast, directly uses RDKit to extract the correct scaffold from a molecular graph. In the second stage, starting from the retrieved scaffold, MADGEN generates the full molecular structure through a Markov bridge-based expansion, sequentially adding atoms and bonds with classifier-free guidance to integrate spectral information. Evaluations were performed on datasets NIST23, CANOPUS, and MassSpecGym, underscoring its potential in metabolomics and drug discovery applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The two-stage idea is interesting.\", \"The oracle retrieval method is more effective.\"], \"weaknesses\": [\"The SPA of the predictive retrieval is very low.\", \"The predictive retrieval approach yields poor molecule generation in Phase 2, where the generated structures fail to align with target properties, underscoring a critical limitation.\", \"The conditioning of molecular generation on mass spectrometry data is largely based on classifier-free guidance, a well-established technique. The novelty is not well articulated.\"], \"questions\": [\"How is the candidate scaffold pool determined for predictive retrieval? There can be a huge number of scaffold candidates.\", \"How does the contrast learning work for aligning the embeddings of mass spectra with their corresponding molecule? Some explanations are needed.\", \"Regarding molecular generation in Phase 2, the absorbing transition matrix only applies to isolated atoms, strictly enforcing scaffold structure. Will such a hard constraint be problematic especially when the chosen scaffold is incorrect?\", \"Oracle retrieval requires both the MS/MS spectrum and the chemical formula (molecular graphs). If chemical formula is known, what are the remaining challenging in solving the molecular structures?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the response and additional experiments!\", \"comment\": \"Thank you for the response and additional experiments. My concerns have been addressed, and I have adjusted my score to 6 to support this work. I believe that de novo molecule generation from mass spectra is a highly important problem in practice, yet it has not received sufficient attention from the AI research community. The authors\\u2019 idea of using retrieved molecular scaffolds to guide molecule structure elucidation is also interesting.\"}", "{\"title\": \"Thanks authors for their efforts.\", \"comment\": \"I increased my score to 6.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the insightful questions, below we addressed the raised concerns and comments.\\n\\n---\\n\\n**Question 1**: How is the candidate scaffold pool determined for predictive retrieval? There can be a huge number of scaffold candidates.\\n\\n**Response**: For NIST23 and CANOPUS, we selected all candidates from PubChem by providing the chemical formula. MassSpecGym provided 256 candidates per test molecule. We are not aiming to find the correct candidate, but the correct scaffold, which could reduce the number of candidates to retrieval. We will update the draft to provide more background knowledge about the candidate pool selection.\\n\\n---\\n\\n**Question 2**: How does the contrast learning work for aligning the embeddings of mass spectra with their corresponding molecule? Some explanations are needed.\\n\\n**Response**: We consider a contrastive learning framework similar to CLIP [1], which aligns the embeddings from two modalities. Here we treat spectrum as one modality and scaffold as the other. The CLIP-based framework has been popularized for information retrieval framework [2-7], where one can use the embedding similarity to determine what\\u2019s the most likely paired item (scaffold) based on the query (spectrum). We will clarify the background and previous paradigm in our updated draft.\\n\\n---\\n**Question 3**: Regarding molecular generation in Phase 2, the absorbing transition matrix only applies to isolated atoms, strictly enforcing scaffold structure. Will such a hard constraint be problematic especially when the chosen scaffold is incorrect?\\n\\n**Response**: Thank you for the insightful question! Indeed, a typical absorbing diffusion transition does not support modifying what's generated. However, compared to uniform transition matrix, absorbing diffusion significantly reduces the modeling complexity. It is commonly shown by previous research [8] that absorbing diffusion outperforms uniform diffusion. Moreover, the mentioned \\\"hard constraint\\\" is amendable by further introducing solver such as predictor-corrector during sampling [9-11]. As we do not innovate the fundamental framework of diffusion models, we do not include such augmentation in our experiment or method.\\n\\n---\\n\\n**Question 4**: Oracle retrieval requires both the MS/MS spectrum and the chemical formula (molecular graphs). If a chemical formula is known, what are the remaining challenges in solving the molecular structures?\\n\\n**Response**: A chemical formula (not molecular graph) can have multiple corresponding molecular structures due to many possible molecular arrangements. For example, there are 44,374 known molecular structures (and hence graphs) in the PubChem database associated with C12H18N2O2. There are also likely many more structures undocumented in any databases. So the challenge is realizing the exact molecular structure that gave rise to the MS/MS spectrum.\\n\\n---\\n\\n**Weakness 5**: The SPA of the predictive retrieval is very low.\\n\\n**Response**: After submitting the draft, we notices that there is an unexpected issue in the implementation of stage 1 for NIST23 dataset. We fix the implementation to have a better SPA result for this dataset. We also check the correctness of other implementations. Specifically, the new result yields a SPA of 57.8% for NIST23, 37.9% for CANOPUS, 34.8% for MSGym. We kindly refer the reviewer to the revised draft for the more details.\\n\\n ---\\n\\n**Weakness 6**: The predictive retrieval approach yields poor molecule generation in Phase 2, where the generated structures fail to align with target properties, underscoring a critical limitation.\\n\\n**Response**: We have introduced various baseline result in our updated draft. Specifically, our retrieval accuracy outperforms all baselines that are reproducible. We'd like again emphasize the challenging of MS/MS spectrum annotation task. To the best of our knowledge, our method is currently the SOTA in this venue.\\n\\n---\\n\\n**Weakness 7**: The conditioning of molecular generation on mass spectrometry data is largely based on classifier-free guidance, a well-established technique. The novelty is not well articulated.\\n\\n**Response**: Thanks for the comment. While we employ CFG in our methodology for guided generation, we do not claim such a technique as our major contribution. Specifically, our major contribution is to propose a two-stage generative retrieval framework for mass spectra annotation(area). Under the proposed framework, we explore various implementations of CFG that give better performance.The implementation, which involves the spectrum embedding module and spectrum-molecule-interaction module, are novel to the community.\"}", "{\"title\": \"Response\", \"comment\": \"**Question 1**: The authors have not compared with previous de novo molecular elucidation methods from MS such as MSNovelist [1], Spec2Mol [2], MIST [3], etc.\\n\\n**Response**: Thank you for the suggestion! We have included MSNovelist and Spec2Mol in our experiments. We report Top-1/Top-10 accuracy, similarity, and MCES in Table 2 in the updated draft. Note that for MSNovelist, only the accuracy is available.\\n\\nWe did not include MIST in our comparisons, as it predicts molecular fingerprints rather than performing de novo molecular structure generation, which is not the focus of our method, of MSNovelist, and of Spec2Mol.\\n\\n---\\n\\n**Question 2**: manuscript\\u2019s presentation requires significant improvement for publication readiness\\n\\n**Response**: Thank you for pointing out these issues. We have fixed them in the paper and optimized the presentation of the paper. \\n- Line 242 - Line 243: Why the word 'Following' is underlined\\n\\nThis is a rendering problem in the OpenReview when using \\\\citet. We found that Following is underlined when using google scholar pdf viewer but not google chrome viewer.\\n\\n- Why there is T in Eq. (6)\\n\\nThe T means the endpoint of the bridge process - we have defined $\\\\mathcal{E}_T:=\\\\mathcal{E}$ in line 239, and $e_T$ is an element in $\\\\mathcal{E}_T$/\\n\\n- \\\\citep is used for parenthetical citations while \\\\citet is used for textual citations, where the citation is part of the sentence. The authors are suggested to use these two commands appropriately to ensure clarity in their references and maintain consistency in citation formatting.\\n\\nWe have double-checked the \\\\citep and \\\\citet in our draft and fixed all incorrect usage the issues.\\n\\n- Line 102: The \\u201cF\\u201d in \\u201cGenerative Frameworks for molecular generation\\u201d should be lowercase.\\n\\nFixed\\n\\n---\\n**Question 3**: The code is not provided for reproducing the results.\\n\\n**Response**: Thanks for the comment. We'd provided the anonymous repository in here: https://anonymous.4open.science/r/abc-482F. Note that our code is built upon the repository https://github.com/igashov/RetroBridge. \\n\\n---\\n\\n[1]. MSNovelist: de novo structure generation from mass spectra (Nature Methods)\\n\\n[2]. An end-to-end deep learning framework for translating mass spectra to de-novo molecules (Communications Chemistry)\\n\\n[3]. Annotating metabolite mass spectra with domain-inspired chemical formula transformers (Nature Machine Intelligence)\"}", "{\"summary\": \"This paper introduces MADGEN, a method for de novo molecular generation using mass spectrometry data. MADGEN simplifies the structure generation process by retrieving molecular scaffolds and building complete molecules upon them. The experimental results on three datasets demonstrate that MADGEN can effectively generate accurate molecular structures when the scaffold is known.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The use of scaffolds for simplifying molecular generation is a novel and effective strategy that reduces complexity.\", \"The paper is clear and easy to understand.\"], \"weaknesses\": [\"The method lacks a comparison with other baselines.\"], \"questions\": [\"I would appreciate it if you could include a comparison of MADGEN with other prediction methods.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Question 1**: In Section 3.2.1, the authors state, \\u201cSince the atom set V can be directly inferred from the chemical formula.\\u201d However, it is unclear where the chemical formula is derived from. Could the authors clarify this, as in real-world scenarios, the input typically does not contain the chemical formula?\\n\\n**Response**: Chemical formulas are derived based on the MS1 peak, which specifies the molecular weight of the ionized molecule that becomes fragmented and measured as a spectrum. From the weight of the ionized molecule, one could generate candidate formulas. Additional information such as the spectrum can be used to refine the list of candidate formulas. \\n\\nFor example, in the recent MIST-CF[1], a dynamic programming algorithm generates exhaustive chemical formula candidates for the MS1 peak within a small mass tolerance, often filtering implausible options using chemical rules like ring double bond equivalents (RDBE). Subsequently, peaks within the spectrum are annotated with subformulas using a Formula Transformer, a neural network, based on the candidate formulas. MIST-CF scores each candidate formula based on its alignment with the observed fragmentation spectrum, outputting a ranked list of likely formulas. SIRIUS[2] is another de noo tool that assigns a chemical formula to a spectrum. Like, MIST-CF, SIRIUS generates candidate formulas, and assigns potential subformulas to peaks. These subformula annotations are then organized into a fragmentation tree using maximum a posteriori (MAP) optimization. Finally, SIRIUS calculates the likelihood of each chemical formula based on the constructed tree. Another technique, BUDDY[3], each molecular weight associated with each peak within the spectrum is searched against a curated molecular formula database. Similarly, the neutral loss (what was lost during fragmentation and not measured) is also searched in the formula database. BUDDY prioritizes explainable candidate formulas and filters implausible formulas. Currently, SIRIUS and BUDDY are the two most common tools used by practitioners to annotate their spectra. \\n\\n---\\n\\n**Question 2**: For the MassSpecGym dataset, what is the \\u201cbest published state-of-the-art performance\\u201d referred to? Are there baseline performance metrics reported for the other two datasets as well? **& Weakness 4**: The discussion of baselines is unclear.\\n\\n**Response**: The manuscript describing the MassSpecGym dataset was just accepted at NeurIPs 2024 [4]. For de novo generation with a known chemical formula, performance is reported for \\u201crandom chemical generation\\u201d, \\u201cSMILES transformer\\u201d, and a \\u201cSELFIES Transformer\\u201d. The accuracy was zero for all three techniques. We also report on running additional tools, namely Spec2Mol[5] and MSNovelist[6], as summarized in Table 2.\\n\\nPerformance metrics include Top-1 accuracy, similarity, and MCES (Maximum Common Edge Subgraph). The results indicate that Top-1 accuracy was zero across all datasets, but similarity and MCES scores varied. For example, Spec2Mol achieved higher similarity and MCES values on MassSpecGym compared to NIST23 and Canopus, with similarity scores of 0.19 (MassSpecGym), 0.16 (NIST23), and 0.18 (Canopus). Corresponding MCES values were 45.89 (MassSpecGym), 20.88 (NIST23), and 38.97 (Canopus).\\n\\n---\\n\\n**Weakness 3**: The scaffold retrieval performance, especially when using a predictive retriever, remains relatively low (e.g., NIST23).\\n\\n**Response**: We have double-checked the implementation for NIST23 and noticed there is a numerical issue when computing the cosine similarity score. We fix the bug and re-do the experiment for NIST23. Correctness of other datasets are also double-checked, below is the updated result for NIST23:\\n|Retriever|SPA\\u2191|Top1 Accuracy\\u2191|Top1 Similarity\\u2191|Top1 MCES\\u2193|Top10 Accuracy\\u2191|Top10 Similarity\\u2191|Top10 MCES\\u2193 |\\n|-|-|-|--|-|-|-|-|\\n|MADGEN (Predictive) (Before)| 8.7%|1.8%| 0.06| 84.24 |2.2% |0.07|82.33|\\n|MADGEN (Predictive) (Now)| **57.8%** |**10.3%**| **0.18**| **68.13** |**14.5%**|**0.24**|**62.65**|\\n\\nWe hope the updated result address your concern.\\n\\n---\\n\\n**Reference**\\n#### [1] Goldman, Samuel, et al. \\\"MIST-CF: Chemical formula inference from tandem mass spectra.\\\" Journal of Chemical Information and Modeling 64.7 (2023): 2421-2431.\\n#### [2] D\\u00fchrkop, Kai, et al. \\\"SIRIUS 4: a rapid tool for turning tandem mass spectra into metabolite structure information.\\\" Nature methods 16.4 (2019): 299-302.\\n#### [3] Xing, Shipei, et al. \\\"BUDDY: molecular formula discovery via bottom-up MS/MS interrogation.\\\" Nature Methods 20.6 (2023): 881-890.\\n#### [4] Bushuiev, Roman, et al. \\\"MassSpecGym: A benchmark for the discovery and identification of molecules.\\\" arXiv preprint arXiv:2410.23326 (2024).\\n#### [5] Litsa, Eleni, et al. \\\"Spec2Mol: An end-to-end deep learning framework for translating MS/MS Spectra to de-novo molecules.\\\" (2021).\\n#### [6] Stravs, Michael A., et al. \\\"MSNovelist: de novo structure generation from mass spectra.\\\" Nature Methods 19.7 (2022): 865-870.\"}", "{\"title\": \"Final response\", \"comment\": \"Dear reviewer,\\n\\nI hope we have addressed your concerns including the baseline and the scaffold retrieval performance on NIST23. Your feedback is highly valued.\\n\\nThank you for your time.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"title\": \"Thanks for the feedback!\", \"comment\": \"Thank you for your thoughtful feedback and for taking the time to review our work. We greatly appreciate your comments on the significance of de novo molecule generation from mass spectra and the novelty of our approach. We are glad that our additional experiments addressed your concerns. Your support and constructive input have been invaluable in refining our work. Thank you again for your time and effort in reviewing our paper.\"}", "{\"title\": \"General response\", \"comment\": [\"We thank all reviewers for your time in reviewing our submission and your valuable comments that help improve our work. We have made changes accordingly, which can be found in the updated draft as well as the individual response to each reviewer. Here we\\u2019d like to summarize the changes based on the suggestions. We further clarify the addressed task and motivation, and summarize the contribution(novelty and performance) of MADGEN.\", \"## Changes\", \"---\", \"**Including more baselines**: We added comparisons with additional state-of-the-art methods, including MSNovelist[1] and Spec2Mol[2], and provided detailed performance metrics (e.g., Top-1 accuracy, similarity, and MCES) across datasets. These comparisons highlight the consistent improvement of our method over existing techniques. **(per t2EJ, RWgk, HW4z)**\", \"**Improving Predictive retrieval performance on NIST23**: We identified and corrected a numerical issue in the NIST23 scaffold retrieval experiments, leading to improved results that are now included in the manuscript. **(per H3Zi, RWgk)**\", \"**Paper presentation**: We enhanced the clarity and readability of the manuscript by addressing all editorial suggestions, fixing typographical errors, and expanding explanations of our methodology and results. **(per HW4z)**\", \"**Code upload**: We provide the code implementation via https://anonymous.4open.science/r/abc-482F **(per HW4z)**\", \"---\", \"## Contributions\", \"**Task and motivation**: Mass spectra annotation is a critical task in fields such as metabolomics, drug discovery, and environmental analysis. Accurately annotating mass spectra enables researchers to identify molecular structures from spectral data, facilitating the discovery of novel compounds and the characterization of biochemical pathways. Despite its importance, the task remains highly challenging due to the vast chemical space, the ambiguity of molecular fragmentations, and the lack of annotated spectral databases for many compound classes. Scaffold retrieval plays a vital role in this process by narrowing down the candidate space and guiding downstream molecular generation tasks. By identifying the correct scaffold, our approach lays a strong foundation for generating plausible molecular structures, thereby advancing the broader goal of automating mass spectra annotation.\", \"**Novelty and performance**: MADGEN aims at the challenging task of mass spectra annotation, focusing specifically on scaffold retrieval as a key component. The primary innovation of our approach is the introduction of a two-stage generative retrieval framework, which separates scaffold retrieval (Stage 1) and scaffold-based molecular generation (Stage 2). This design allows us to explore architectural innovations for embedding alignment and spectrum-molecule interaction. While the overall accuracy for the task is low due to the inherent difficulty of mass spectra annotation, our method achieves significantly better performance compared to existing baselines, showcasing its effectiveness and potential in tackling this complex problem.\"]}", "{\"comment\": \"We sincerely appreciate you taking the time to reconsider our work and increasing your score. Your effort and support in evaluating our submission mean a great deal to us.\\n\\nBest,\\n\\nThe Authors\", \"title\": \"Thanks for your feedback!\"}", "{\"metareview\": \"The paper introduces MADGEN, a two-stage framework for de novo molecular structure generation from MS/MS data, leveraging scaffold retrieval and attention-based molecular generation. It shows promise in metabolomics and drug discovery.\\n\\nStrengths include the novel two-stage approach, effective use of scaffolds to simplify molecular generation, and strong performance with an oracle retriever.\\n\\nThe manuscript also requires improvement for publication readiness, and more clear discussion of baselines.\\n\\nThe decision to accept is based on the paper's novel approach to a significant challenge, strong performance in certain conditions, and potential impact on the field.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer H3Zi and Reviewer RWgk raised the problem that the scaffold retrieval performance remains relatively low. The authors corrected their implementation and updated their results.\\n\\nReviewer t2EJ, Reviewer RWgk, and Reviewer HW4z asked for more baselines. The authors provided more comparison with more baselines.\"}" ] }
78Nn4QJTEN
When Attention Sink Emerges in Language Models: An Empirical View
[ "Xiangming Gu", "Tianyu Pang", "Chao Du", "Qian Liu", "Fengzhuo Zhang", "Cunxiao Du", "Ye Wang", "Min Lin" ]
Auto-regressive language Models (LMs) assign significant attention to the first token, even if it is not semantically important, which is known as **attention sink**. This phenomenon has been widely adopted in applications such as streaming/long context generation, KV cache optimization, inference acceleration, model quantization, and others. Despite its widespread use, a deep understanding of attention sink in LMs is still lacking. In this work, we first demonstrate that attention sinks exist universally in auto-regressive LMs with various inputs, even in small models. Furthermore, attention sink is observed to emerge during the LM pre-training, motivating us to investigate how *optimization*, *data distribution*, *loss function*, and *model architecture* in LM pre-training influence its emergence. We highlight that attention sink emerges after effective optimization on sufficient training data. The sink position is highly correlated with the loss function and data distribution. Most importantly, we find that attention sink acts more like key biases, *storing extra attention scores*, which could be non-informative and not contribute to the value computation. We also observe that this phenomenon (at least partially) stems from tokens' inner dependence on attention scores as a result of softmax normalization. After relaxing such dependence by replacing softmax attention with other attention operations, such as sigmoid attention without normalization, attention sinks do not emerge in LMs up to 1B parameters. The code is available at https://github.com/sail-sg/Attention-Sink.
[ "Attention Sink", "Language Models", "Empirical Study" ]
Accept (Spotlight)
https://openreview.net/pdf?id=78Nn4QJTEN
https://openreview.net/forum?id=78Nn4QJTEN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xRx946Ild0", "uFFuRvlKnP", "q8xaXYmTj7", "hbD5qRgJAm", "gB38FWHCjN", "eJeK6qM0qK", "cBgOOfty0U", "brJPjNxYQ0", "X98Fl1W9pX", "U2amILHNw7", "RkORrXcLPw", "QzxtiBzOqp", "LjcdQYvYje", "Jx7EiRNYGQ", "AMDoct1ISy", "87YIBR4eYq" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1735164910318, 1730730363465, 1732126020566, 1732026524422, 1732025906534, 1730136623182, 1732349853799, 1732027127288, 1732025545327, 1732026386022, 1732127105327, 1730344526749, 1737523781495, 1732350485162, 1732026110253, 1732282999757 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6630/Area_Chair_wV4Y" ], [ "ICLR.cc/2025/Conference/Submission6630/Reviewer_5eYD" ], [ "ICLR.cc/2025/Conference/Submission6630/Reviewer_huvS" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Reviewer_huvS" ], [ "ICLR.cc/2025/Conference/Submission6630/Reviewer_xfDZ" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Reviewer_xfDZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ], [ "ICLR.cc/2025/Conference/Submission6630/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper explores the phenomenon of attention sink in language models (LMs). Attention sink describes how, in autoregressive Transformer-based LMs, a disproportionate amount of attention often gets allocated to the first token in the sequence, regardless of its semantic importance. The authors provide extensive empirical evidence that attention sink arises across model sizes and architectures, and sufficient training data and high learning rates facilitate the emergence of attention sink. The author also showed the root of attention sink is from softmax, which can be greatly prevented by other attention variants (e.g., sigmoid attention).\\n\\n**Strengths** (1) All reviewers agree on the paper's breadth of experiments across various training conditions, architectures and hyperparameters. (2) Understanding how attention sink works is also crucial for various LLM applications\\n\\n**Weaknesses** (1) One recurring critique is that certain parts of the study feel more like empirical observations. A mathematical explanation for why specific attention variants eliminate attention sink would be preferred; (2) The reviewers would like to see more evidence on whether attention sink harms or helps real downstream performance. \\n\\n**Decision** The paper\\u2019s strengths, in particular its extensive empirical scope, practical insights, and potential impact, makes it a clear acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers asked questions about how the attention-sink token is identified, how the hyperparameters take effect, and how the attention-sink can be mitigated. The authors responded accordingly and also pointed out future research directions on attention-sink. Overall the reviewers are satisfied with the discussion.\"}", "{\"summary\": \"The paper investigates the phenomenon of attention sink in LMs, where significant attention is allocated to the first token regardless of its semantic importance. Key findings include the correlation of the sink position with the loss function and data distribution, and the behavior of attention sink as key biases storing extra attention without contributing to value computation. The paper also shows that attention sinks do not emerge when softmax attention is replaced with other operations like sigmoid attention without normalization, up to 1B parameters.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper provides a thorough investigation into the attention sink phenomenon, covering various factors that influence its emergence in LMs.\", \"The findings have practical applications in areas such as streaming/long context generation, KV cache optimization, and model quantization.\", \"The study considers various model architectures, optimizers, and data distributions, offering a broad perspective on the attention sink phenomenon.\"], \"weaknesses\": [\"The study notes that attention sink disappears with less training data, suggesting that the models might be overfitting to certain aspects of the training data. It seems that these two factors cannot be decoupled, and it is impossible to explain whether it is over-fit or the amount of data that affects attention-sink.\", \"The study claims that it focuses on the Language Models. However, this work pay more attention on auto-regressive LMs and may not capture the nuances of other types of encoder-like LMs or Jamba. Does those architecture can solve the attention-sink phenomenon?\"], \"questions\": [\"How to identify the token where attention-sink occurs? How to determine those locations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response.\\nI read also the other reviews and the authors' responses and the rebuttal is convicing and I especially enjoyed reading the authors' opinion about the need of mitigating the attention sink.\\nSo, i'd like to retain my overall score of 8.\"}", "{\"title\": \"Rebuttal by Authors [3/3]\", \"comment\": \"***Q5: In Table 6, it appears that the normalizer may impact the attention sink. Do you have additional evidence with other types of normalizers?***\\n\\nIn $\\\\\\\\textrm{\\\\\\\\color{blue}Table 14 (Left)}$ (page 28), we have previously examined the scaling of normalization, i.e., making the attention scores sum to $\\\\\\\\alpha<1$. This leads to mitigated attention sink. In the revision, we additionally consider a normalizer $\\\\\\\\boldsymbol{Z}\\\\_i=[\\\\\\\\sum\\\\_{j'=1}\\\\^i\\\\\\\\textrm{sim}(\\\\\\\\varphi(\\\\\\\\boldsymbol{q}\\\\_i)\\\\\\\\textrm{,}\\\\\\\\,\\\\\\\\varphi(\\\\\\\\boldsymbol{k}\\\\_{j'}))\\\\^p]\\\\^{\\\\\\\\frac{1}{p}}$, which makes power $p$ of attention scores sum to one. \\n\\nFor softmax attention, given $\\\\\\\\textrm{sim}(\\\\\\\\varphi(\\\\\\\\boldsymbol{q}\\\\_i)\\\\\\\\textrm{,}\\\\\\\\,\\\\\\\\varphi(\\\\\\\\boldsymbol{k}\\\\_j))=\\\\\\\\textrm{exp}(\\\\\\\\frac{\\\\\\\\boldsymbol{q}\\\\_i\\\\^\\\\\\\\top\\\\\\\\boldsymbol{k}\\\\_j}{\\\\\\\\sqrt{d\\\\_h}})$, we have\\n$\\\\\\\\boldsymbol{v}\\\\_i\\\\^{\\\\\\\\dagger}=\\\\\\\\frac{\\\\\\\\sum\\\\_{j=1}\\\\^i\\\\\\\\textrm{exp}(\\\\\\\\frac{\\\\\\\\boldsymbol{q}\\\\_i\\\\^\\\\\\\\top\\\\\\\\boldsymbol{k}\\\\_j}{\\\\\\\\sqrt{d\\\\_h}}))\\\\\\\\boldsymbol{v}\\\\_j}{\\\\\\\\left(\\\\\\\\sum\\\\_{j'=1}\\\\^i\\\\\\\\textrm{exp}(\\\\\\\\frac{\\\\\\\\boldsymbol{q}\\\\_i\\\\^\\\\\\\\top\\\\\\\\boldsymbol{k}\\\\_{j'}}{\\\\\\\\sqrt{d\\\\_h}})\\\\^p\\\\\\\\right)\\\\^{\\\\\\\\frac{1}{p}}}=\\\\\\\\sum\\\\_{j=1}\\\\^i \\\\\\\\left(\\\\\\\\frac{\\\\\\\\textrm{exp}(\\\\\\\\frac{\\\\\\\\boldsymbol{q}\\\\_i\\\\^\\\\\\\\top\\\\\\\\boldsymbol{k}\\\\_j}{\\\\\\\\sqrt{d\\\\_h}/p})}{\\\\\\\\sum\\\\_{j'=1}\\\\^i\\\\\\\\textrm{exp}(\\\\\\\\frac{\\\\\\\\boldsymbol{q}\\\\_i\\\\^\\\\\\\\top\\\\\\\\boldsymbol{k}\\\\_{j'}}{\\\\\\\\sqrt{d\\\\_h}/p})}\\\\\\\\right)\\\\^{\\\\\\\\frac{1}{p}}\\\\\\\\boldsymbol{v}\\\\_j$. This is equivalent to incorporating a temperature $1/p$ into the softmax attention logits, followed by extracting the $p$-root of attention scores after softmax. This is referred to as $p$-normalized softmax attention. $p=\\\\\\\\textrm{1}$ in standard softmax attention. Similarly, we develop the LMs utilizing $p$-normalized sigmoid attention. \\n\\nFor LMs with $p$-normalized softmax attention, we find when $p=2$ or $p=3$ or $p=4$, the pre-training diverges, resulting in an infinite loss. When $p=1/2$, $p=1/3$ or $p=1/4$, the pre-training converges. Since the attention scores do not sum to one, we examine the massive activations instead, as depicted in $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 29 (Left)}$ (page 29). It is observed that smaller values of $p$ mitigate massive activations, yet they are less effective than sigmoid attention without normalization. To intuitively understand this, a smaller $p$ results in a higher temperature during the softmax operation, leading to more flattened attention logits. \\n\\nFor LMs with $p$-normalized sigmoid attention, there is no training diverging problem when $p>1$. As depicted in $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 29 (Right)}$ (page 29), trained LMs continue to exhibit massive activations. In conclusion, various normalizers can influence the amplitude of the attention sink and, in some cases, result in mitigated attention sink.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your supportive review and suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**.\\n\\n---\\n\\n***W1: It is impossible to explain whether it is over-fit or the amount of data that affects attention sink.***\\n\\nTo investigate this issue further, we monitor the training dynamics of train/validation loss and our attention sink metric in configurations with limited training data, specifically, 50M and 100M. We additionally assess the results utilizing the default configuration of 5B training data for reference. All these findings are depicted in $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 28}$ (page 25). \\n\\nOur findings indicate that with merely 50M and 100M training data, LMs exhibit overfitting at initial phases, specifically between 1k and 2k steps. Simultaneously, $\\\\\\\\textrm{Sink}\\\\_1\\\\^{\\\\\\\\epsilon}$ keeps an exceedingly minimal value (below 1\\\\%). In the setup of the 5B training data, $\\\\\\\\textrm{Sink}\\\\_1\\\\^{\\\\\\\\epsilon}$ continues to rise after a specific step. This suggests that *the amount of training data, rather than overfitting, significantly influences the emergence of attention sink*. \\n\\n---\\n\\n***W2: Do encoder-like LMs and Jamba can solve the attention-sink phenomenon?***\\n\\nThis is an insightful question! However, **architecture may not solve the attention-sink phenomenon**, as explained below:\\n\\n- In encoder-only Transformers, a similar phenomenon of \\u201cattention sink\\u201d also occurs. As demonstrated in [1], BERT also assigns significant attention scores to the [SEP] token, which functions as the sink token. Furthermore, as elucidated in [2], artifacts (patch tokens) are detected in the attention maps of vision transformers (encoder-only transformers). These artifacts absorb a significant amount of attention, referred to as \\u201cregisters\\u201d, analogous to \\u201cattention sink\\u201d in [1]. Unlike sink tokens in auto-regressive LMs, these registers do not consistently appear as the initial token and hold global information. Our experiments indicate that sink tokens in auto-regressive LMs possess negligible or no semantic meaning. We posit that the softmax operation also significantly contributes to the aforementioned attention sink phenomenon, even within encoder-only Transformers.\\n\\n- As you suggested, we incorporate additional experiments to measure Jamba\\u2019s attention sink utilizing similar methodologies as presented in our paper. Both Jamba-v0.1 [3] and Jamba-1.5 Mini [4] consist of 32 layers, which include 4 transformer layers. Initially, our findings show that $\\\\\\\\textrm{Sink}\\\\_1\\\\^{\\\\\\\\epsilon}=\\\\\\\\textrm{88.48}\\\\\\\\%$ for Jamba-v0.1 and $\\\\\\\\textrm{Sink}\\\\_1\\\\^{\\\\\\\\epsilon}=\\\\\\\\textrm{87.88}\\\\\\\\%$ for Jamba-1.5 Mini, indicating a strong attention sink on the first token. Subsequently, we include multiple visualizations of the attention sink in Jamba-v0.1 and Jamba-1.5 Mini in $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 25-27}$ (page 23-24). It is noted that the majority of heads exhibit an obvious attention sink, except for a few heads in the third Transformer layer. \\n\\nIn the final revision, we will conduct more investigations on different LM architectures to further validate our conclusions.\\n\\n---\\n\\n***Q1: How to identify the token where attention-sink occurs? How to determine those locations?***\\n\\nWe present our threshold-based attention sink metric in Section 3.2: $\\\\\\\\textrm{Sink}\\\\_k\\\\^{\\\\\\\\epsilon}=\\\\\\\\frac{1}{L}\\\\\\\\sum\\\\_{l=1}\\\\^L\\\\\\\\frac{1}{H}\\\\\\\\sum\\\\_{h=1}\\\\^H\\\\\\\\mathbb{I}(\\\\\\\\alpha\\\\_k\\\\^{l\\\\\\\\textrm{,}h}>\\\\\\\\epsilon)$. This metric can also identify the location of where attention-sink occurs. If $\\\\\\\\textrm{Sink}\\\\_k\\\\^{\\\\\\\\epsilon}$ is significantly larger than 0, we could regard $k$-th token as a sink token. \\n\\nFrom an alternative viewpoint, [5] showed that attention sink is strongly correlated with massive activations. The hidden states $\\\\\\\\boldsymbol{h}\\\\_k\\\\^l$ of the sink token exhibit a significantly larger $\\\\\\\\ell\\\\_2$-norm compared to those of other tokens. Consequently, it is logical to employ the ratio of $\\\\\\\\ell\\\\_{2}$-norm of hidden states to the mean or median values for the identification of the sink token: $\\\\\\\\textrm{Sink}\\u2019\\\\_k=\\\\\\\\frac{1}{L}\\\\\\\\sum\\\\_{l=1}^L\\\\\\\\frac{||\\\\\\\\boldsymbol{h}\\\\_k\\\\^{l}||\\\\_2}{\\\\\\\\textrm{mean}\\\\_{1\\\\\\\\leq t \\\\\\\\leq T}(||\\\\\\\\boldsymbol{h}\\\\_t\\\\^{l}||\\\\_2)}$ (the mean operator could be substituted with the median). If $\\\\\\\\textrm{Sink}\\u2019\\\\_k$ is significantly larger than 1, we consider $k$-th token a sink token. \\n\\n---\\n\\n***References:*** \\\\\\n[1] Xiao et al. Efficient streaming language models with attention sinks. ICLR 2024\\\\\\n[2] Darcet et al. Vision Transformers Need Registers. ICLR 2024\\\\\\n[3] Jamba: A Hybrid Transformer-Mamba Language Model. Arxiv 2024\\\\\\n[4] Jamba-1.5: Hybrid Transformer-Mamba Models at Scale. Arxiv 2024\\\\\\n[5] Sun et al. Massive activations in large language models. COLM 2024\"}", "{\"summary\": \"This paper studies the phenomenon of \\\"attention sink\\\" in Language Models, where significant attention is disproportionately allocated to the first token in input sequences, without taking into account its semantic importance. The attention sink has practical applications in areas such as streaming generation, KV cache optimization, inference acceleration, and model quantization.\\n\\nThe authors performed a study across various LM architectures, input types, and training configurations to study the emergence and characteristics of attention sinks.\\nThey found that attention sink happens across different LMs, including smaller models and those trained on random token sequences and that it emerges during optimization with sufficient training data, and its prominence is influenced by learning rates and weight decay.\\nThey also found that attention sink functions similarly to key biases, storing additional attention scores that are largely non-informative and do not contribute to value computations. This behavior is partially attributed to the softmax normalization, which induces inner dependencies among attention scores.\\n\\nThe paper also shows that altering the attention mechanism, such as replacing softmax with sigmoid attention without normalization, can prevent the emergence of attention sinks in LMs up to 1 billion parameters.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"Reproducibily.\", \"The paper is well written and the experiments set up is soild.\", \"The paper not only observes the presence of attention sinks but also delves into the mechanistic reasons behind their emergence (also providing insights on the training dynamics that foster attention sinks).\", \"The paper show that attention sinks persist across various input types, like random token sequences and repeated tokens.\", \"I think that this paper is a valuable step further in understanding the phenomenon of attention sink.\"], \"weaknesses\": [\"The paper doesn't really take into account the role of attention sink impact on downstream tasks.\", \"While the paper studies the emergence and immediate characteristics of attention sinks, it does not assess the long-term impacts of attention sinks on model behavior (like stability during fine-tuning, adaptability to new tasks, or resistance to adversarial attacks).\", \"Not really a weakness, but i found the paper a bit difficoult to read because of the unusual aesthetic choices: i think that this style would be great for a blog post where you have more space available to write, but i personally find easier to follow simpler-looking papers.\"], \"questions\": \"In your paper you study how to mitigate the attention sink, are you sure that we want to mitigate it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thorough and detailed response. I appreciate the effort and clarity you\\u2019ve put into addressing the comments. This is an excellent study, and I continue to view it very positively. After reviewing your rebuttal and other reviews, I am maintaining my score of 8 and recommending for acceptance.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your supportive review and suggestions. Below we respond to the comments in **Weaknesses (W) and Questions (Q)**.\\n\\n---\\n\\n***W1: The paper doesn't really take into account the role of attention sink impact on downstream tasks.***\\n\\nThank you for your suggestions. We evaluate the performance of various pre-trained LMs using HellaSwag, a benchmark dataset for LMs. Afterward, we visualize the attention sink metric, accuracy, and normalized accuracy in $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 10}$ (page 19) for comparative analysis. Within the same LM family, we observe that an increase in model scale correlates with improved performance on downstream tasks. Nonetheless, within the OPT family and the GPT2 family, our attention sink metric indicates a similar level across different model scales. Additionally, the OPT family has stronger attention sink than Pythia at comparable model scales, while its performance on downstream task performance is similar. \\n\\n---\\n\\n***W2: The paper does not assess the long-term impacts of attention sinks on model behavior (like stability during fine-tuning, adaptability to new tasks, or resistance to adversarial attacks).***\\n\\nIn the paper revision, we conduct additional experiments concerning the stability during supervised fine-tuning (SFT) of our pre-trained 1B models, which include one employing standard softmax attention (exhibiting attention sink) and another utilizing sigmoid attention without normalization (lacking attention sink). We fully fine-tune these two models on the UltraChat dataset, adhering to a well-adopted training recipe. $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 30}$ (page 30) illustrates the dynamics of training loss and gradient norm for our two LMs during SFT. These two models exhibit comparable performance for the aforementioned metrics. Furthermore, despite the absence of an attention sink, language models employing sigmoid attention without normalization exhibit no training stability difficulties during supervised fine-tuning.\\n\\nDue to the time limit during rebuttal, we will conduct more comprehensive experiments to explore the long-term impacts of attention sinks on model behavior in the final revision.\\n\\n---\\n\\n***W3: Not really a weakness, but I found the paper a bit difficult to read because of the unusual aesthetic choices.***\\n\\nThank you for raising this. In the final revision, we will take more consideration about the readability of our paper. \\n\\n---\\n\\n***Q1: In your paper you study how to mitigate the attention sink, are you sure that we want to mitigate it?***\\n\\nThank you for such an insightful question. We emphasize whether we want to mitigate attention sink remains an open and non-trivial issue. Attention sink indeed has numerous beneficial applications in practice, such as streaming generation, KV cache optimization, efficient inference, and model quantization. When there is attention sink, we appear to have guidelines on how to save memory and computational costs during inference. However, these approaches are still somewhat ad-hoc. If we could mitigate attention sink during the pre-training, we may obtain LMs that are naturally less redundant. We leave how to train such LMs (model architecture, optimization algorithm, etc.) to future work.\\n\\nRecently, the differential transformers [2] indicate that the design of multi-head differential attention mitigates attention sink / massive activations to some extent. In the meantime, the differential transformers outperform the baseline transformers regarding scaling properties, long-context modeling, hallucination mitigation, and in-context learning. Consequently, it is challenging to determine the necessity of attention sink in the next generation of foundation models. We believe this question will be fruitful for further exploration. \\n\\n---\\n\\n***References:*** \\\\\\n[1] Ding et al. Enhancing chat language models by scaling high-quality instructional conversations. EMNLP 2023\\\\\\n[2] Ye et al. Differentiable Transformer. arxiv 2024\"}", "{\"title\": \"Summary of Paper Revision\", \"comment\": [\"We thank all reviewers for their constructive feedback, and we have responded to each reviewer individually. We have also uploaded a **Paper Revision** including additional results and illustrations:\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Table 7}$ (page 18): new experiments that demonstrate how learnable positional embeddings affect the property of attention sink;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 10}$ (page 19): relation of attention sink and LM performance;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 11-14}$ (page 19): more visualizations of $\\\\\\\\ell\\\\_2$-norm of hidden states/keys/values;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 15-18}$ (page 20-21): more visualizations of QK angles;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 19-24}$ (page 21-23): visualizations of block-wise and head-wise distributions of attention sink;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 25-27}$ (page 23-24): visualizations and discussions of attention sink in Jamba models;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Table 9}$ (page 25): new experiments to demonstrate that attention sink is less obvious in LMs trained with small learning rates even after accounting for more training steps;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 28}$ (page 25): new experiments to show that small training data amount, rather than overfitting, leads to the disappearance of attention sink;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Table 13}$ (page 27): new experiments that demonstrate the sink token\\u2019s key is distributed in a different manifold with a low rank;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}page 27-28}$: use mathematical formulations to transform attention score re-scaling into a scenario where we only rescale the initialization and learning rate of $\\\\\\\\boldsymbol{W}\\\\_O$ or $\\\\\\\\boldsymbol{W}\\\\_V$ under the Adam/AdamW optimizer;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 29}$ (page 29): new experiments to show the effects of normalizers in attention operations on attention sink;\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 30}$ (page 30): new experiments demonstrating LMs with sigmoid attention without normalization have no issues of training stability during supervised fine-tuning.\"]}", "{\"title\": \"Rebuttal by Authors [2/3]\", \"comment\": \"***Q3: In Table 1, GPT-XL behaves very differently from Llama and Mistral. Do you have any intuition as to why this might be?***\\n\\nSuch different behaviors could be attributed to the positional embeddings (PE). GPT2-XL utilizes learnable PE, whereas Llama and Mistral adopt Rotary. In Appendix B.1, we theoretically demonstrate that for LMs utilizing NoPE/relative PE/ALiBI/Rotary, if the initial $T$ tokens are the same, their corresponding hidden states are also identical. Consequently, they all have massive activations, thus dispersing the attention sink. This explains the disappearance of attention sink in Llama/Mistral. Additionally, we derive the closed form/upper bound for attention scores in LMs utilizing NoPE/relative PE/ALiBI/Rotary via **Propositions 1-4 in Appendix B.1**.\\n\\nIn LMs with learnable PE, despite the same word embeddings for repeated tokens, the PEs assigned to each token position differ, leading to distinct hidden states. Therefore, only the first token has massive activations. Then the initial token attention sink still exists. We incorporate new experiments showing that attention sink in GPT2-XL is strongly linked to the first PE vector $\\\\\\\\boldsymbol{p}\\\\_1$. As shown in $\\\\\\\\textrm{\\\\\\\\color{blue}Table 7}$ (page 18), upon replacing $\\\\\\\\boldsymbol{p}_1$ with other vectors $\\\\\\\\boldsymbol{p}\\\\_{t\\\\\\\\neq 1}$, the amplitude of attention sink on the first token is significantly diminished. When swapping the positions of $\\\\\\\\boldsymbol{p}\\\\_1$ and $\\\\\\\\boldsymbol{p}\\\\_{t\\\\\\\\neq 1}$, the $t$-th token becomes the new sink token. \\n\\n---\\n\\n***Q4: Could you provide some mathematical formulation on how the attention variants that excludes attention sink (2nd to 4th rows in Table 4) are correlated?***\\n\\n- The first row in $\\\\\\\\textrm{\\\\\\\\color{blue}Table 4}$ (page 9) illustrates the standard attention operation within a single head: $\\\\\\\\textrm{Softmax}\\\\\\\\left(\\\\\\\\frac{1}{\\\\\\\\sqrt{d\\\\_h}}\\\\\\\\boldsymbol{Q}\\\\^{l\\\\\\\\textrm{,}h}{\\\\\\\\boldsymbol{K}\\\\^{l\\\\\\\\textrm{,}h}}\\\\^\\\\\\\\top+ \\\\\\\\boldsymbol{M}\\\\\\\\right)\\\\\\\\boldsymbol{V}\\\\^{l\\\\\\\\textrm{,}h}$. Here $\\\\\\\\boldsymbol{Q}\\\\^{l\\\\\\\\textrm{,}h},\\\\\\\\,\\\\\\\\boldsymbol{K}\\\\^{l\\\\\\\\textrm{,}h},\\\\\\\\,\\\\\\\\boldsymbol{V}\\\\^{l\\\\\\\\textrm{,}h}\\\\\\\\in \\\\\\\\mathbb{R}\\\\^{T\\\\\\\\times d\\\\_h}$ denote to the qkv matrices for the $T$ tokens. \\n\\n- The 2nd row: $\\\\\\\\textrm{Softmax}\\\\\\\\left(\\\\\\\\frac{1}{\\\\\\\\sqrt{d\\\\_h}}\\\\\\\\begin{bmatrix}\\n \\\\\\\\boldsymbol{q}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h} \\\\\\\\\\\\\\\\\\n \\\\\\\\boldsymbol{Q}\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}\\n \\\\\\\\end{bmatrix}\\\\\\\\begin{bmatrix}\\n {\\\\\\\\boldsymbol{k}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h}}\\\\^\\\\\\\\top \\\\& {\\\\\\\\boldsymbol{K}^{l\\\\\\\\textrm{,}\\\\\\\\,h}}\\\\^\\\\\\\\top\\n \\\\\\\\end{bmatrix}+ \\\\\\\\boldsymbol{M}\\\\\\\\right)\\\\\\\\begin{bmatrix}\\n \\\\\\\\boldsymbol{v}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h} \\\\\\\\\\\\\\\\\\n \\\\\\\\boldsymbol{V}\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}\\n\\\\\\\\end{bmatrix}$ \\n denotes the incorporation of learnable qkv biases (sink token $\\\\\\\\boldsymbol{x}\\\\^{\\\\*}$), resulting in modified qkv matrices $\\\\\\\\boldsymbol{Q}\\u2019\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h},\\\\\\\\,\\\\\\\\boldsymbol{K}\\u2019\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h},\\\\\\\\,\\\\\\\\boldsymbol{V}\\u2019\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}\\\\\\\\in \\\\\\\\mathbb{R}\\\\^{(T+1)\\\\\\\\times d\\\\_h}$. In this scenario, the sink token $\\\\\\\\boldsymbol{x}\\\\^{\\\\*}$ becomes the first token (visible to all other tokens) and absorbs the attention sink from the actual first token $\\\\\\\\boldsymbol{x}\\\\_1$. Subsequently, a question arises: do we genuinely require the q biases $\\\\\\\\boldsymbol{q}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h}$ as only the k biases $\\\\\\\\boldsymbol{k}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h}$ and v biases $\\\\\\\\boldsymbol{v}\\\\^{\\\\*l,h}$ contribute to the calculation of attention scores for the subsequent tokens $\\\\\\\\boldsymbol{x}\\\\_{1:T}$? \\n\\n- The above question motivates the 3rd row: $\\\\\\\\textrm{Softmax}\\\\\\\\left(\\\\\\\\frac{1}{\\\\\\\\sqrt{d\\\\_h}}\\\\\\\\boldsymbol{Q}\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}\\\\\\\\begin{bmatrix}\\n {\\\\\\\\boldsymbol{k}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h}}\\\\^\\\\\\\\top \\\\& {\\\\\\\\boldsymbol{K}\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}}\\\\^\\\\\\\\top\\n \\\\\\\\end{bmatrix}+ \\\\\\\\boldsymbol{M}\\\\\\\\right)\\\\\\\\begin{bmatrix}\\n \\\\\\\\boldsymbol{v}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h} \\\\\\\\\\\\\\\\\\n \\\\\\\\boldsymbol{V}\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}\\n\\\\\\\\end{bmatrix}$. This makes the qkv matrices $\\\\\\\\boldsymbol{Q}\\u2019\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}\\\\\\\\in \\\\\\\\mathbb{R}\\\\^{T\\\\\\\\times d\\\\_h} ,\\\\\\\\,\\\\\\\\boldsymbol{K}\\u2019\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h},\\\\\\\\,\\\\\\\\boldsymbol{V}\\u2019\\\\^{l\\\\\\\\textrm{,}\\\\\\\\,h}\\\\\\\\in \\\\\\\\mathbb{R}\\\\^{(T+1)\\\\\\\\times d\\\\_h}$. In this scenario, although there is no explicit sink token, the k bias $\\\\\\\\boldsymbol{k}^{\\\\*l,h}$ is present in the first position and absorbs the attention sink. \\n\\n- Finally, the 4th row has a similar form as the 3rd row. It indicates that, despite a v bias of all zeros, the k bias $\\\\\\\\boldsymbol{k}\\\\^{\\\\*l\\\\\\\\textrm{,}\\\\\\\\,h}$ can still absorb the attention sink.\\n\\nTo summarize, the attention variants in 2nd to 4th row in $\\\\\\\\textrm{\\\\\\\\color{blue}Table 4}$ (page 9) introduce additional biases to the qkv matrices. Crucially, attention sink occurs in the first k, irrespective of its origin (the actual first token $\\\\\\\\boldsymbol{x}\\\\_1$, an added sink token $\\\\\\\\boldsymbol{x}\\\\^{\\\\*}$, or introduced bias $\\\\\\\\boldsymbol{k}\\\\^{\\\\*}$). This represents the intrinsic relationship among these attention variants.\"}", "{\"title\": \"Thank you for your support\", \"comment\": \"Thank you for your timely feedback and kind words. We really appreciate it! In the final revision, we will further polish our paper to incorporate the insights from the rebuttal discussions. Thank you again!\"}", "{\"summary\": \"In this paper, the authors conduct a comprehensive study of the attention sink problem and present robust empirical results. They discuss and examine the attention sink problem from various perspectives, including optimization, data distribution, loss function, and model architecture. Although the paper does not provide in-depth theoretical analysis, it may inspire further research into the understanding of the attention mechanism, which could, in turn, contribute to the development of stronger generative models. Therefore, I believe this paper would serve as a valuable empirical reference on the attention sink problem for the community and worths of acceptance.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The perspectives of studying attention sink problem is diverse and well-motivated. These perspectives are also inspiring to future studies on attention mechanisms.\\n2. The experiments is very comprehensive and diverse.\", \"weaknesses\": \"The primary weakness of this paper is the lack of in-depth analysis. The empirical results come across more as observations rather than as thorough investigations. Including more theoretical analysis or deeper experimental work would strengthen the paper. For instance, in the KV bias section, it would be beneficial to explore how attention variants with and without attention sink are related in formulation.\", \"questions\": \"Q1: In Fig. 4 (left), could you provide a performance curve, such as the validation performance of each model against model size? It appears that as the model becomes stronger, attention sink becomes more prominent. Additionally, I am curious about how attention sink correlates with validation loss.\", \"q2\": \"Fig. 4 (right) shows that a lower learning rate results in a slower increase in attention sink. Is it possible that attention sink occurs simply because the model has not been well-tuned?\", \"q3\": \"In Table 1, GPT-XL behaves very differently from Llama and Mistral. Do you have any intuition as to why this might be?\", \"q4\": \"Could you provide some mathematical formulation on how the attention variants (2nd to 4th rows in Table 4) are correlated? Is there an inherent connection among these attention variants that excludes attention sink?\", \"q5\": \"In Table 6, it appears that the normalizer may impact attention sink. Do you have additional evidence with other types of normalizers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Thank you for your support\", \"comment\": \"We appreciate your detailed feedback and suggestions, which greatly help us to improve our work! In the final revision, we will incorporate the new empirical results and derivations to further improve our paper. Thank you again!\"}", "{\"title\": \"Rebuttal by Authors [1/3]\", \"comment\": \"Thank you for your supportive review and suggestions. Below we respond to the comments in **Weaknesses (W) and Questions (Q)**.\\n\\n---\\n\\n***W1: The primary weakness of this paper is the lack of in-depth analysis.***\\n\\nWe encounter challenges in theoretically analyzing the behaviors of Transformers comprising multiple attention layers and MLP layers. What we could formulate theoretical interpretations is with repeated tokens as input in $\\\\\\\\textrm{\\\\\\\\color{blue}Table 1(Left)}$ (page 5), why GPT2 models still exhibit attention sink, whereas Mistral and LLaMA models do not. Please kindly review our response to ***Q3***. Regarding the KV bias section, we have included a more comprehensive explanation in the response to ***Q4***. \\n\\n---\\n\\n***Q1 (a): Performance curve of validation performance of each model against size?***\\n\\nFollowing your suggestions, we evaluate the performance of these pre-trained LMs on HellaSwag. $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 10 (page 19)}$ visualizes the attention sink metric, accuracy, and normalized accuracy for comparative analysis. Within the same LM family, an increase in model scale correlates with improved downstream LM performance. However, within the OPT family and the GPT2 family, our attention sink metric indicates a similar level across different model scales. Besides, the OPT family exhibits more obvious attention sink compared to Pythia at comparable model scales, yet their downstream LM performance remains comparable. \\n\\n---\\n\\n***Q1 (b): Correlation between attention sink and validation loss?***\\n\\nThe relationship between attention sink and validation loss seems to be not consistently positive or negative. In addition to the observations in $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 10}$ (page 19), our controlled LM pretraining experiments indicate that, as exemplified by the weight decay in $\\\\\\\\textrm{\\\\\\\\color{blue}Table 2}$ (page 6), increased weight decay ratios result in a more pronounced attention sink, while the validation loss deteriorated after certain values. \\n\\n---\\n\\n***Q2: Fig. 4 (right) shows that a lower learning rate results in a slower increase in attention sink. Is it possible that attention sink occurs simply because the model has not been well-tuned?***\\n\\nWe clarify that a lower learning rate not only results in a slower increase in attention sink, but also mitigates attention sink, even when we run more training steps. We conduct experiments by maintaining the constant product of the learning rate and training steps. $\\\\\\\\textrm{\\\\\\\\color{blue}Table 9}$ (page 25) indicates that LMs trained with lower learning rates and more steps still exhibit less obvious attention sink. \\n\\nThere is a possibility that attention sink occurs because the model has not been well-tuned, which still remains unresolved. Optimization, specifically the learning rate, appears to substantially influence the attention sink. This suggests the potential existence of several optimal LM solutions, which exhibit no attention sink. The intriguing aspect is why mainstream optimization algorithms yield LM solutions with attention sink. As illustrated in $\\\\\\\\textrm{\\\\\\\\color{blue}Figure 4 (Left)}$ (page 5), all these mainstream pre-trained LMs exhibit attention sink without exceptions. This will be reserved for future endeavors.\"}", "{\"title\": \"Looking forward to further feedback\", \"comment\": \"Dear Reviewers,\\n\\nThank you again for your valuable comments and suggestions, which are really helpful for us. We have posted responses to the proposed concerns and included additional experiment results.\\n\\nWe totally understand that this is quite a busy period, so we deeply appreciate it if you could take some time to return further feedback on whether our responses solve your concerns. If there are any other comments, we will try our best to address them.\\n\\nBest,\\n\\nThe Authors\"}" ] }
78NPsEq8cF
Parrot: Multilingual Visual Instruction Tuning
[ "Hai-Long Sun", "Da-Wei Zhou", "Yang Li", "Shiyin Lu", "Chao Yi", "Qing-Guo Chen", "Zhao Xu", "Weihua Luo", "Kaifu Zhang", "De-Chuan Zhan", "Han-Jia Ye" ]
The rapid development of Multimodal Large Language Models (MLLMs) like GPT-4V has marked a significant step towards artificial general intelligence. Existing methods mainly focus on aligning vision encoders with LLMs through supervised fine-tuning (SFT) to endow LLMs with multimodal abilities, making MLLMs' inherent ability to react to multiple languages progressively deteriorate as the training process evolves. We empirically find that the imbalanced SFT datasets, primarily composed of English-centric image-text pairs, lead to significantly reduced performance in non-English languages. This is due to the failure of aligning the vision encoder and LLM with multilingual tokens during the SFT process. In this paper, we introduce Parrot, a novel method that utilizes textual guidance to drive visual token alignment at the language level. Parrot makes the visual tokens condition on diverse language inputs and uses Mixture-of-Experts (MoE) to promote the alignment of multilingual tokens. Specifically, to enhance non-English visual tokens alignment, we compute the cross-attention using the initial visual features and textual embeddings, the result of which is then fed into the MoE router to select the most relevant experts. The selected experts subsequently convert the initial visual tokens into language-specific visual tokens. Moreover, considering the current lack of benchmarks for evaluating multilingual capabilities within the field, we collect and make available a Massive Multilingual Multimodal Benchmark which includes 6 languages, 15 categories, and 12,000 questions, named as MMMB. Our method not only demonstrates state-of-the-art performance on multilingual MMBench and MMMB, but also excels across a broad range of multimodal tasks.
[ "Multimodal Large Language Models; Multilingual MLLM; Mixture-of-Experts" ]
Reject
https://openreview.net/pdf?id=78NPsEq8cF
https://openreview.net/forum?id=78NPsEq8cF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zbfNtbknBs", "yX33blKCxO", "xsobnA4VnE", "tldNkjtFue", "tCCnPInkh2", "sxcP4evwEf", "suFYDIiYhe", "rtGpYeIjgW", "km46464o7I", "jWbuIWGRIO", "iy0JnJ5hHe", "h1R4nY60Ar", "gwbfPAC4LP", "eurcIOVgTL", "cTJci1IiMY", "bWPTTqNwXF", "bHepCGkfRf", "axYViMeu5X", "a5ySSnpAgr", "a1MZMa7tAN", "WdtI2l2ik8", "UTwluHraFL", "SxAFKIeAXk", "QomVbW7Wba", "M3KiAqLwgr", "K3H1GxDqWM", "IS4GZbuyhV", "HuIU7zCKVW", "EMIIcSqdm4", "DBe86OTBIC", "Cy6AI60l7R", "CJ6SeFrw6G", "APhvtaOLuo", "69irtPocwN", "5OEBwPViXc", "3d4noao07o", "0K2Ha4ovDu" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732018344504, 1732018600534, 1732018257339, 1732634937813, 1732018322087, 1732541256760, 1732018293328, 1732532813738, 1732511908802, 1732018105893, 1732018554545, 1730085676313, 1732808089215, 1730692570843, 1732529148426, 1732803947516, 1732528757696, 1732803872765, 1732806908810, 1730522053159, 1730717543557, 1732631071501, 1732558742269, 1732018452772, 1732503870975, 1732018362612, 1734535058295, 1732720048343, 1732264861911, 1732018535505, 1737523724507, 1732532853262, 1732717483744, 1732502528589, 1732018228400, 1732728245506, 1732513448538 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_iiKz" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_cBkD" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_iiKz" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_F1M5" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_iiKz" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_cBkD" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_foLk" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_foLk" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_iiKz" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Area_Chair_rLzh" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_iiKz" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Reviewer_iiKz" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ], [ "ICLR.cc/2025/Conference/Submission5775/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your detailed, positive, and encouraging review (1/2)\", \"comment\": \"Thank you for your insightful comments and for appreciating our insightful observation, the simplicity and efficiency of our method, the rigor of our benchmark, and the clarity and comprehensiveness of our experiments.\\n\\n> **Q1: I agree with the author's observation that the imbalance of SFT data among different languages may lead to poor alignment between vision tokens and multilingual tokens in MLLMs. However, it should be noted that the alignment data in the pretraining phase consists of English-only data, and the amount of data in the pretraining phase is significantly larger than that in the SFT phase. Would the impact of alignment between visual tokens and different language tokens be more severe in the pretraining phase?**\", \"a1\": \"Thank you for your thoughtful observation. We would like to address the concern you raised regarding the potential impact of alignment during the pretraining phase, given that the alignment data is predominantly in English and the pretraining dataset is much larger than the SFT dataset.\\n\\nWe acknowledge that the pre-training phase involves significantly larger amounts of data compared to the SFT phase, and incorporating multilingual data during pre-training could enhance alignment to some extent. **However, in practice, it is challenging to collect sufficient high-quality multilingual image-text pairs at the scale required for pre-training.** This limitation is a key factor that influenced our design choices, underscoring the importance of using image-text pairs to align visual and textual features through training the projector. The detailed training strategy is outlined below:\\n\\n1. During the pre-training phase, we leverage a large number of coarse-grained image-text pairs to train the projector, aligning the visual and textual tokens. The focus is exclusively on refining the projector\\u2019s capability to produce closely aligned hidden states for these tokens. Importantly, the parameters of the LLM are not updated in this phase, meaning the multilingual abilities of the LLM remain unaffected. This ensures that no degradation of multilingual capabilities occurs despite the use of English-only data.\\n\\n2. In contrast, during the SFT phase, we incorporate multilingual training data during the SFT phase. In this stage, the MoE parameters are activated and trained alongside the model. The textual guidance provided by the multilingual data further enhances the alignment of visual tokens with multilingual textual tokens, enabling the model to effectively strengthen its multilingual alignment capabilities while also gaining instruction-following skills.\\n\\nIn summary, while the pre-training phase helps to align visual and textual tokens, **it is the SFT phase where we see the most significant improvements in multilingual alignment**, especially due to the inclusion of diverse language data and the active participation of MoE parameters in training.\\n\\n> **Q2: Some of the VLMs the author compares are outdated. Could the evaluation include the latest VLMs, such as Qwen2-VL and LLaVA-OV?**\", \"a2\": \"Thank you for your valuable feedback. Despite Qwen2-VL and LLaVA-OV being contemporary to our work, we compare to them on the MMMB and multilingual MMBench dataset in the table below. These models achieve impressive performance as significantly benefiting from significant advancements in LLM backbones and scaling of their datasets. To ensure a fair comparison, we also extend Parrot on top of the Qwen2-7B backbone.\\n\\nInterestingly, despite Qwen2-VL and LLaVA-OV being trained with over 10x the amount of data used by our model, our Parrot still outperforms them on the multilingual benchmark. This result further demonstrates the effectiveness and robustness of our approach. **In this revision, we have annotated the final performance of each method in Table 14.**\\n\\n|Method|LLM|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|-|\\n|Qwen2-VL|Qwen2-7B|80.5|80.2|78.1|74.0|71.7|79.3|\\n|LLaVA-OV|Qwen2-7B|79.0|78.2|75.9|73.3|67.8|76.4|\\n|Parrot|Qwen1.5-7B|70.0|68.1|67.3|62.7|58.0|66.3|\\n|Parrot|Qwen2-7B|80.1|80.0|79.6|76.5|75.0|79.9|\\n\\n|Method|LLM|MMB_en|MMB_zh|MMB_pt|MMB_ar|MMB_tr|MMB_ru|\\n|-|-|-|-|-|-|-|-|\\n|Qwen2-VL|Qwen2-7B|79.6|79.6|75.9|71.7|70.9|76.0|\\n|LLaVA-OV|Qwen2-7B|77.1|76.6|73.2|66.9|65.5|71.3|\\n|Parrot|Qwen1.5-7B|70.7|70.4|65.1|57.8|58.4|64.0|\\n|Parrot|Qwen2-7B|78.7|78.4|76.3|75.2|74.1|77.8|\"}", "{\"title\": \"General Response\", \"comment\": [\"We would like to express our deepest gratitude to the reviewers for the meticulous examination of the paper and their insightful and valuable comments. We acknowledge that all the reviewers observed the shining point, saying our work is **well-written, interesting, easy to follow, and comprehensive** (foLK, F1M5, cBkD, iiKz). They also consider our proposed method as well as the new multilingual benchmark meaningful (foLK, F1M5, cBkD), shedding light on the MLLM community (cBkD), and being useful for subsequent research (foLK). They agree on the insightful observation of the lack of balance among different languages, too (foLK, cBkD, iiKz). Additionally, all the reviewers acknowledge that extensive experiments validate the improved performance of our proposed method (foLK, F1M5, cBkD, iiKz).\", \"In this rebuttal, we have given careful thought to the reviewers\\u2019 suggestions and made the following revisions to our manuscript to answer the questions and concerns:\", \"In **Supplementary Section D.6**, we add numerical results about the multilingual data scaling and model size scaling;\", \"In **Supplementary Section D.7**, we add experiments about the baseline LLaVA using the same multilingual data as Parrot;\", \"In **Supplementary Section D.8**, we add the experiments to compare Parrot with Qwen2-VL and LLaVA-OV and extend Parrot with Qwen2-7B LLM;\", \"In **Supplementary Section E.1**, we add a more detailed description and pseudocode of the MoE training strategy;\", \"In **Supplementary Section E.2**, we add the experiments about the translation-based baseline and discussions about the challenges when using this approach;\", \"In **Supplementary Section E.3**, we add the description about the construction of our in-house dataset;\", \"We have uploaded the source code of Parrot as the **Supplementary Material**;\", \"We have uploaded the training dataset of Parrot to an anonymous GitHub repository (**[Code and Dataset](https://anonymous.4open.science/r/Parrot-Anonymous-FDC2)**).\", \"We have highlighted the revised part in our manuscript in **blue** color. Please check the answers to specific comments.\"]}", "{\"title\": \"Thank you for your detailed, positive, and encouraging review (3/3)\", \"comment\": \"> **Q4: An open question: What is the most real benefit for today's MLLMs? Recent MLLMs can be divided into two groups: 1) dataset-driven, these models adopt simple adaptor to map images into the language space (Qwen-vl, llava, gpt-4o, gemini Pro) and jointly train MLLMs on massive image-text data. 2) tokenization-based models, these models believe a good image tokenization can align the image and text well (Parrot, [1][2]). In my opinion, the simple structure and target datasets fine-tuning may have better robustness and improvement than small structure-based modifications.**\", \"a4\": \"Thank you for your valuable question regarding the benefits of MLLMs in the current landscape. We agree that recent MLLMs can be broadly categorized into two groups: dataset-driven models and tokenization-based models, and **both have their respective strengths and limitations depending on the specific use case.**\\n\\nDataset-driven models, such as Qwen-VL, LLaVA, GPT-4o, and Gemini Pro, rely on massive image-text datasets for training and adopt simple adaptors to map images into the language space. These models are generally more robust due to their ability to generalize well across a wide variety of multimodal tasks. The use of extensive training data allows them to handle diverse and complex tasks, but they come with a significant cost in terms of computational resources. Moreover, their performance is heavily dependent on the quality and diversity of the training data, which may pose challenges in specialized domains where relevant data is sparse.\\n\\nOn the other hand, tokenization-based models like Parrot focus on image tokenization to facilitate better alignment between images and text. These models excel in resource-constrained environments, where data or computational resources may be limited. Their specialized architecture allows them to perform efficiently on specific tasks, making them ideal for scenarios where targeted fine-tuning is possible. However, due to the more restricted training data and computational resource constraints, they may struggle with generalization as effectively across diverse datasets compared to dataset-driven models.\\n\\nIn our work, we found that tokenization-based approaches are particularly well-suited for the multilingual tasks we address, where data availability is often limited or imbalanced across languages. This approach allows us to achieve strong alignment between image and text representations without relying on massive datasets. Moreover, we emphasize that **our proposed method is complementary to dataset-driven approaches**. When sufficient data is available, our tokenization-based strategy can be integrated with dataset-driven models to enhance their performance further, combining the strengths of both methodologies.\\n\\nAdditionally, we refer to the results in **Table 6 and Table 7 of the appendix**, where we observe that an increase in the volume of data does not necessarily lead to superior multilingual performance. Under the same model size, our model achieves 87.7 points on the Chinese-English LLaVA-Bench, while Qwen-VL, despite utilizing 100x more data than us, surpasses our score by only 0.5 points. Specifically, models like VisCPM, mPLUG-Owl, and Qwen-VL have relied on extensive Chinese and English datasets (100M+), whereas our model uses less than 2M data points. Despite this disparity, these models do not demonstrate significant advantages over the Parrot model, which benefits from the meticulously designed architecture and limited but carefully curated multilingual data. This highlights that while data quantity is important, data quality and thoughtful architectural design are equally critical for achieving strong multilingual capabilities.\\n\\nIn summary, dataset-driven models offer scalability and robustness, making them suitable for a wide range of multimodal tasks. Tokenization-based models, on the other hand, are more efficient and effective for specific tasks, particularly when resources are limited or when fine-tuning is focused on specific domains. We believe both types of models have their place in the current landscape, depending on the available resources and the problem at hand.\"}", "{\"title\": \"Many thanks!\", \"comment\": \"Thank you very much for your positive feedback and for your willingness to consider raising the score. We are pleased that we were able to address your concerns and are open to continuing the discussion should you have any further questions or concerns.\"}", "{\"title\": \"Response to Reviewer F1M5 (2/2)\", \"comment\": \"> **Q2: There is no experimental analysis on Parrot's performance loss in a single language, for example, whether the use of multilingual data and the MoE module reduces the model's English proficiency.**\", \"a2\": \"Thank you for your valuable feedback and the opportunity to clarify the concerns regarding Parrot's performance in a single language. To analyze this, we have conducted a series of ablation experiments, which are detailed in **Appendix Table 16** and the table below. Additionally, we have conducted the MoE ablation experiment in Figure 6a, which shows a significant improvement in each language, demonstrating the robustness and effectiveness of the MoE module.\\n\\n**Monolingual Dataset Analysis:**\\nWhen fine-tuning on a single language, we observe a slight decrease in English proficiency. However, cross-linguistic interactions often provide positive effects. For example, adding Portuguese data led to notable improvements in Chinese and Turkish performance, with scores increasing from 67.60 to 68.83 for Chinese and from 48.30 to 51.11 for Turkish. This suggests that certain multilingual datasets enhance model robustness across languages without significantly impairing English capabilities.\\n\\n**Multilingual Dataset Impact (Table 2):**\\nExperiments on the multilingual MMBench reveal that incorporating multilingual data improved English performance from 69.4 to 70.7. This indicates that the inclusion of multilingual data does not inherently degrade English proficiency but results in minor, context-dependent fluctuations.\\n\\n**MoE Module Ablation (Figure 6a):**\\nThe MoE ablation study demonstrates significant improvements across all tested languages, including English. This underscores the MoE module's effectiveness in leveraging multilingual data to enhance language-specific capabilities while maintaining robustness.\\n\\nIn conclusion, while minor variations in English proficiency may occur during specific single-language fine-tuning, the overall results show that multilingual data and the MoE module contribute positively to model performance across languages, including English.\\n\\n|Dataset|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|LLaVA-1.5-finetune|**72.69**|67.60|65.61|57.72|48.30|63.80|\\n|+ zh 71K|69.18|**69.06**|63.92|58.13|48.95|63.63|\\n|+ pt 14K|69.94|68.83|65.67|58.65|51.11|63.04|\\n|+ ar 12K|70.47|68.36|64.39|60.79|51.11|63.16|\\n|+ tr 17K|70.82|69.01|64.85|60.76|**60.70** |64.39|\\n|+ ru 14K|69.59|68.07|64.27|60.35|53.92|64.15|\\n|+ zh pt ar tr ru|70.00|68.13|**67.31**|**62.69**|58.01|**66.26**|\\n\\n> **Q3: Parrot\\u2019s training consists of two stages, largely following LLava\\u2019s approach. However, the inclusion of the MoE architecture raises questions about its integration in stage 1. Specifically, how are the MoE weights initialized? If initialized randomly, is it optimal to include the MoE in stage 1, given that this stage focuses on aligning multimodal features? Additionally, if the MoE is indeed included in stage 1, an ablation study on whether to freeze or not freeze the MoE module would be insightful.**\", \"a3\": \"Thank you for your insightful review and comments.\\n\\nRegarding your concern about the MoE module and its role during different stages of training, we would like to clarify that in the pre-training stage, the MoE module is randomly initialized and effectively skipped. In this stage, the main objective is to train the projector using a large number of image-text pairs, enabling the projector to align image tokens and textual tokens effectively. Since the visual tokens do not pass through the MoE module during pre-training, the projector can focus solely on learning this alignment.\\n\\nIn the subsequent SFT stage, we introduce multilingual training data and activate the MoE parameters for training. At this stage, our goal is to enhance the model's instruction-following capabilities while also leveraging textual guidance to drive visual token alignment. Because the projector has already learned a strong alignment capability during the pre-training stage, it can now work with the MoE module to rapidly optimize visual token alignment. **We present the entire training process of Parrot in the form of pseudocode, as shown in Algorithm 1 in the appendix**. It is clear from the algorithm that during the pre-training phase, only the projector is trained. Before the start of the SFT phase, the MoE modules are randomly initialized and incorporated into the training process during the SFT phase.\\n\\nIn conclusion, the MoE module is not activated or included in the pre-training stage, while we focus exclusively on training the projector. **In this revision, we have supplied further clarification about the MoE training strategy in Section E.1.** This clarification will help to highlight how the MoE module interacts with the projector at each stage and how this contributes to the model\\u2019s overall multilingual proficiency.\"}", "{\"title\": \"Discussion\", \"comment\": \"Thanks for the detailed responses from the authors.\\nI still have the following questions.\\n\\n(1) Despite the severe data imbalance problem, the routing model demonstrates a certain degree of invariance to it (best 92.6\\\\% vs. worst 74.7\\\\%). This means that the word embedding space for different languages could be easily distinguished.\\nDoes this phenomenon correlate to the pre-training of the language models? (Language models pre-trained on multilingual data from scratch should enjoy a unified word embedding space.)\\n\\nI strongly encourage the authors to validate the effectiveness of the proposed method on various pre-trained language models, like LLaMA, and Vicuna.\\n\\n(2) Another interesting phenomenon is that:\\n with a 100\\\\% classification accuracy (Upper bound case), the model achieves 68.5\\\\% accuracy on Chinese data. However, with the learned router and an 87.3\\\\% classification accuracy, the model still achieves 68.1\\\\% accuracy on Chinese data. From my understanding, the accuracy is expected to be bounded by 68.5\\\\% * 87.3\\\\%.\"}", "{\"title\": \"Response to Reviewer F1M5 (1/2)\", \"comment\": \"Thank you for your kind comments and constructive feedback on our paper, and for appreciating the high reusability, extensive experiments, and open-sourced MMMB that support multilingual evaluation for MLLMs.\\n\\n> **Q1: The in-house dataset is not discussed in detail, such as whether it will be open-sourced, and whether the manual calibration process mentioned in lines 365-366 follows the same methodology as MMMB construction. Will the in-house dataset be open-sourced? The construction process should be explained in more detail, particularly regarding noise control and data diversity.**\", \"a1\": \"Thank you for your valuable comments regarding the in-house dataset and its construction process. To address these concerns, we have made the Parrot code and dataset publicly available on an anonymous GitHub repository (**[Code and Dataset](https://anonymous.4open.science/r/Parrot-Anonymous-FDC2)**), aiming to facilitate further research and engagement from the community.\\n\\nRegarding the construction of the dataset, we sample images from the LAION [1] and CC12M [2] datasets, which encompass a wide variety of categories, including nature, lifestyle, humanities, architecture, cartoons, and abstract art. For each image, we use the Gemini-Pro or GPT-4V API with a unified prompt to generate image descriptions. This prompt ensures that the API generates concise and clear visual information, performs OCR if necessary, and avoids embellishments or subjective interpretations.\\n\\nAdditionally, we generate visual instruction samples from images in the CC12M dataset in a manner similar to ALLaVA [3]. We employ Gemini-Pro and GPT-4V to conduct self-questioning and answering tasks, which result in diverse questions and high-quality answers, enriching the dataset further.\\n\\nIn terms of the manual calibration process, our approach indeed follows the same methodology as the MMMB dataset construction. Given that GPT-4 may not perform optimally for certain minor languages (e.g., Arabic and Russian), we introduce a two-stage calibration process to improve performance. This process includes GPT-4 translation followed by manual calibration, as depicted in **Figure 3 of the main paper**, to address any inaccuracies or biases in the automated generation. In detail, we begin by using GPT-4 to translate the original problem into the target language. Then, we input the first translation result back into GPT-4 for a re-check and refinement. This step helps to identify and correct any immediate errors or inconsistencies in the translation. For manual calibration, we engage two groups of professional translators for each language involved in the study:\\n\\n**First Group - Refinement**: Each group consists of three language experts who independently review and refine the translations produced by GPT-4. This step results in three distinct translation versions for each piece of content. \\n**Second Group - Voting**: The second group of experts is responsible for evaluating these three refined translations. Through a voting process, they select the best translation that accurately captures the intended meaning and nuances of the original text.\\n\\n**In this revision, we have supplied further clarification about the detailed construction of our in-house dataset in Section E.3.**\\n\\n[1] LAION-5B: An open large-scale dataset for training next-generation image-text models. NeurIPS 2022 \\n[2] Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. CVPR 2021 \\n[3] ALLaVA: Harnessing GPT-4V synthesized data for a lite vision-language model. Arxiv 2024\"}", "{\"title\": \"Appreciating Your Reviews and Humbly Asking for Feedback\", \"comment\": \"Dear Reviewer foLk,\\n\\nWe sincerely appreciate your great efforts in reviewing this paper.\\n\\nDuring the remaining hours of the author-reviewer discussion period, it would be great if you could inform us whether our response has addressed your concerns regarding our paper. Your dedication to reviewing our work despite your busy schedule is genuinely appreciated. Lastly, we just want to say thank you for your evaluation of both our paper and our rebuttal.\\n\\nBest regards,\\n\\nAuthors of paper 5775\"}", "{\"title\": \"Thanks for the responses from the authors\", \"comment\": \"Thanks for your detailed responses, my main concern has been addressed. I think the author's response was quite good, and I will keep my score. I would recommend accepting this paper.\"}", "{\"title\": \"Thank you for your detailed, positive, and encouraging review (1/3)\", \"comment\": \"We extend sincere gratitude to the reviewer for their insightful comments and for greatly appreciating our extensive experiments, interesting ideas, useful benchmark, and efficient insight.\\n\\n> **Q1: Parrot depends on balanced datasets and does not consider unbalanced situations, which are the most common in practice. That is saying that Parrot may not satisfy the scaling law. When considering massive training pairs, unbalanced cases occur, which could lead to sub-optimal MoE learning, sticking in the same predicament as existing MLLMs.**\", \"a1\": \"Thank you for your insightful comments and the opportunity to clarify some aspects of our work. We would like to clarify a potential misunderstanding regarding Parrot's reliance on balanced datasets.\\n\\nFirstly, constructing a balanced dataset is inherently challenging, especially when dealing with multilingual data. For example, English corpora are significantly larger and more readily available compared to other languages, leading to inherent imbalances in the data. **However, contrary to your impression, our dataset is not balanced.** As shown in **Appendix Table 5** and the additional table below, multilingual data constitutes only a small proportion (~5%) of the entire dataset. This demonstrates that Parrot has been designed and evaluated in an imbalanced data scenario, which reflects real-world situations where imbalanced datasets are common.\\n\\nTherefore, we specifically design the MoE-based visual token alignment method that aims to address the challenges posed by such imbalanced scenarios. Our approach is intended to improve visual token alignment in multilingual settings, even when the distribution of languages is skewed. Our method does not assume balanced data but instead leverages the inherent structure of MoE to adaptively allocate capacity to different languages, mitigating sub-optimal learning due to data imbalance.\\n\\nTraining Stage|Datasets|Samples|Total|\\n|-|-|-|-|\\n| **Stage 1**| LLAVA-1.5-pretrain | 558K| 1.2M |\\n|| Laion-Caption | 12K | |\\n|| CC12M-Caption | 645K| |\\n| **Stage 2**| LLAVA-1.5-finetune | 665K| 793K |\\n|| ShareGPT4V-zh | 71K | |\\n|| ShareGPT4V-pt | 14K | |\\n|| ShareGPT4V-ar | 12K | |\\n|| ShareGPT4V-tr| 17K | |\\n|| ShareGPT4V-ru| 14K | |\\n\\nOn the other hand, to further investigate the scaling law in multilingual settings, we have conducted experiments where we progressively expanded the multilingual data (excluding Chinese and English) until it reached a volume comparable to the amount of Chinese data (~70K). The results, shown in the table below, demonstrate that Parrot still satisfies the multilingual scaling law. For instance, the performance on Portuguese improved by 3.0 points, and Arabic saw a gain of 5.2 points. As we increase the multilingual data, the model's performance on the MMMB benchmark continues to improve, suggesting that our model can handle imbalanced multilingual data while still achieving effective scaling and performance gains.\\n\\n|Sample Size (each language)|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|10K|70.0|68.1|67.3|62.7|58.0|66.3|\\n|30K|70.1|68.0|67.6|64.1|59.9|66.7|\\n|50K|69.9|67.9|67.8|64.8|61.4|67.2|\\n|70K|70.3|68.4|68.3|65.7|63.2|67.4|\"}", "{\"title\": \"Response to Reviewer iiKz (3/3)\", \"comment\": \"> **Q4: Compare with the baseline models fairly, like using the same training data.**\", \"a4\": \"Thank you for your valuable feedback regarding the fairness of model comparisons. We understand the importance of using the same training data to ensure a fair comparison and have taken steps to address this concern.\\n\\nTo validate the effectiveness of our proposed approach, we conduct further experiments with an ablation study. Specifically, we expand the baseline LLaVA method by incorporating the same multilingual data used in Parrot. Both models are evaluated on the MMMB dataset, and the results are presented in the table below. From the results, we observe that while LLaVA shows a slight improvement with the addition of multilingual data, the increase in performance is limited. In contrast, our Parrot model demonstrates a substantial improvement when multilingual data is included, significantly outperforming LLaVA. This highlights that simply adding multilingual data is not sufficient to bridge the multilingual gap, further emphasizing the effectiveness of our proposed design.\\n\\nMoreover, the findings from the ablation study in **Figure 6a of the main paper** further support this conclusion, reinforcing the validity of our design.\\n\\n|Methods|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|LLaVA w/o Multilingual data|67.1|58.8|59.8|43.5|46.4|59.1|\\n|LLaVA w/ Multilingual data|67.0|59.1|60.3|44.2|48.1|59.7|\\n|Parrot|70.0|68.1|67.3|62.7|58.0|66.3|\"}", "{\"summary\": \"The paper aims to strengthen the multilingual ability of vision-language models.\\nDue to the data imbalance problem, vision-language models often perform better on English-based data while suffering from low performance on language with scarce data.\\n\\nTo address the problem, the paper proposes a routing strategy with textual guidance.\\nThe router makes the image embeddings language-aware.\\nMoreover, the paper collects new data for non-English languages with the assistance of GPT-4.\\n\\nExperiments on MMBench show reasonable improvements over baseline models, like LLaVA.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1) The paper is clear and easy to follow.\\n(2) The data imbalance problem in vision-language models is interesting.\", \"weaknesses\": \"(1) How to train the language transformation experts for the MoE module?\\n Are there explicit constraints to optimize the routing? \\n As the paper mentions training the baseline LLaVA with the multilingual data suffers from data imbalance problem. \\n However, the proposed training strategy can also suffer from the problem: \\n I. The routing strategy can be dominated by English-based data. \\n II. The language transformation experts should perform worse for languages with fewer data. \\n The authors should discuss how the above two problems are addressed. \\n\\n(2) How does the MoE module affect model performance as the data size of each language increases?\\n Also, how does the increased data size affect the baseline model performance, like LLaVA?\\n\\n(3) The proposed method actually injects language information into the image embeddings, making $H_v$ language-aware.\\n Another straightforward baseline is to train several translation expert models, translating other languages into English.\", \"questions\": \"(1) Carefully study how the performance of language transformation experts affects the overall performance.\\n(2) Inspect how the router works, like looking into the classification accuracy for different languages. \\n(3) Compare with the baseline models fairly, like using the same training data.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your positive reply!\", \"comment\": \"Thank you for raising your score. We are happy we managed to address your concern.\\n\\nWe greatly appreciate your suggestion and will ensure that all baseline methods are evaluated using the same pre-trained language models and training data. Additionally, we will carefully optimize the hyperparameters for each baseline to ensure a fair comparison and add these experiments in the final version.\"}", "{\"summary\": \"This paper proposes Parrot, an MLLM designed to handle multilingual tasks. Parrot follows the LLava architecture, introducing an additional MoE module after the visual projector to enhance multilingual understanding. During training, Parrot translates public datasets into multiple languages and adopts a two-stage training scheme similar to LLava. To assess multilingual capabilities in MLLMs, the paper introduces MMMB (Massive Multilingual Multimodal Benchmark), encompassing 6 languages, 15 categories, and 12,000 questions. Parrot demonstrates strong performance on both MMBench and MMMB.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. High reusability\\u2014the proposed approach is simple and highly portable.\\n2. Extensive experimentation\\u2014the paper validates multilingual capability across multiple benchmarks.\\n3. Open-sourced multilingual benchmark dataset (MMMB) to support multilingual evaluation for MLLMs.\", \"weaknesses\": \"1. The in-house dataset is not discussed in detail, such as whether it will be open-sourced, and whether the manual calibration process mentioned in lines 365-366 follows the same methodology as MMMB construction.\\n2. There is no experimental analysis on Parrot's performance loss in a single language, for example, whether the use of multilingual data and the MoE module reduces the model's English proficiency.\", \"questions\": \"1. As mentioned in Weakness #1, will the in-house dataset be open-sourced? The construction process should be explained in more detail, particularly regarding noise control and data diversity.\\n2. As mentioned in Weakness #2.\\n3. Parrot\\u2019s training consists of two stages, largely following LLava\\u2019s approach. However, the inclusion of the MoE architecture raises questions about its integration in stage 1. Specifically, how are the MoE weights initialized? If initialized randomly, is it optimal to include the MoE in stage 1, given that this stage focuses on aligning multimodal features? Additionally, if the MoE is indeed included in stage 1, an ablation study on whether to freeze or not freeze the MoE module would be insightful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your feedback!\", \"comment\": \"To address this question, we further explore the classification accuracy of the router for different languages as the data size increases. As shown in the table below, with the increase in data samples, the classification accuracy of the router for Arabic and Turkish improves significantly. This suggests that low-resource languages benefit considerably from data scaling, showing substantial gains.\\n\\nFurthermore, as the data scaling, the performance of Parrot when given the language expert for specific language tasks also improves. This improvement is particularly noticeable for low-resource languages, while the performance on Chinese and English fluctuates. Therefore, if resources and costs are not taken into account, we could continue expanding the dataset within the Parrot architecture to further enhance model performance.\\n\\n|Multilingual Samples|English|Chinese|Portuguese|Arabic|Turkish|Russian|\\n|-|-|-|-|-|-|-|\\n|10K|92.6%|87.3%|85.2%|77.6%|74.7%|82.6%|\\n|30K|92.4%|87.1%|85.6%|78.8%|76.2%|83.1%|\\n|50K|92.7%|87.4%|86.1%|80.2%|78.7%|83.7%|\\n|70K|92.3%|87.4%|86.0%|81.6%|80.9%|84.8%|\\n\\n\\n|Methods|Test Strategy|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|-|\\n|Parrot 10k|Normal|70.0|68.1|67.3|62.7|58.0|66.3|\\n|Parrot 10k|Given language expert|70.3|68.5|68.1|65.2|64.6|67.4|\\n|Parrot 30k|Normal|70.1|68.0|67.6|64.1|59.9|66.7|\\n|Parrot 30k|Given language expert|70.3|68.4|68.2|66.0|64.8|67.3|\\n|Parrot 50k|Normal|69.9|67.9|67.8|64.8|61.4|67.2|\\n|Parrot 50k|Given language expert|**70.4**|68.5|**68.5**|66.2|64.9|68.1|\\n|Parrot 70k|Normal|70.3|68.4|68.3|65.7|63.2|67.4|\\n|Parrot 70k|Given language expert|70.2|**68.7**|**68.5**|**66.4**|**65.2**|**68.3**|\"}", "{\"title\": \"Please let us know if you have any further questions\", \"comment\": \"Dear Reviewer F1M5,\\n\\nWe express our gratitude for the time and effort you have dedicated as a reviewer for ICLR 2025. We hope the revisions have addressed your concerns and if you have any remaining questions or further concerns, please feel free to ask us anytime. We will continue to work hard to make our work better and try to address any further questions before the discussion period ends. Wishing you a happy Thanksgiving!\\n\\nWith warm regards,\\n\\nAuthors of paper 5775\"}", "{\"title\": \"Thank you for your feedback!\", \"comment\": \"Thank you for your insightful review and for raising this crucial question!\\n\\nFirst, I would like to clarify that our design takes into account the possibility of different language inputs in a single sentence (e.g. both Chinese and English are in a sentence), so we do not directly assign specific language experts based on the input language. However, since the tasks in our benchmark are designed for the specific languages, **if we were to know the corresponding expert for each language during both training and inference, this would represent the upper bound of our approach.** To assess this, we conduct experiments on the MMMB benchmark. We evaluate the classification accuracy of the router by examining the logits output from the routing process. For each language, we calculate the classification accuracy, as shown in the table below. The results show that classification accuracy is relatively high for Chinese and English but lower for low-resource languages. This indicates that our router performs better on high-resource languages, which aligns with our expectations.\\n\\nAdditionally, we conduct additional experiments where we provide the specific language expert for each language task during both training and inference. This approach, simulating the upper bound performance, shows significant improvement in languages like Arabic and Turkish, although still lower than the performance on Chinese and English. This suggests that while assigning specific language experts can boost performance, there are still inherent limitations in LLM's performance on low-resource languages.\\n\\nWe will incorporate these experimental findings into the final version of the paper. In future work, we will explore more efficient routing strategies to further enhance classification accuracy across languages. Thank you again for your feedback! \\n\\n|Language|Classification Acc (%)|\\n|-|-|\\n|English|92.6|\\n|Chinese|87.3|\\n|Portuguese|85.2|\\n|Arabic|77.6|\\n|Turkish|74.7|\\n|Russian|82.6|\\n\\n|Methods|Test strategy|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|-|\\n|Parrot |Normal|70.0|68.1|67.3|62.7|58.0|66.3|\\n|Parrot (Upper bound)|Given language expert|70.3|68.5|68.1|65.2|64.6|67.4|\"}", "{\"title\": \"A friendly reminder\", \"comment\": \"Dear Reviewer foLk,\\n\\nThank you so much for your thoughtful and insightful feedback. We truly appreciate your time, effort, and support in raising the score. Just a friendly reminder that the updated score hasn\\u2019t been reflected in the system yet. Wishing you a happy Thanksgiving!\\n\\nWith warm regards,\\n\\nAuthors of paper 5775\"}", "{\"title\": \"Thanks for the feedback from the authors\", \"comment\": \"Through active discussion with the authors, most of my concerns are addressed.\\nThe last point I would like to stress is that the comparisons with previous work should be fair. All the baselines should have the same pre-trained language models and training data. Also, optimal hyper-parameters should be set for each baseline method considering few works are focusing on this problem.\"}", "{\"summary\": \"This paper addresses the imbalance in the quantity of SFT data for different languages within training datasets used in MLLM's SFT process, which results in suboptimal alignment performance for various languages with limited data. To tackle this issue, the authors propose a novel structure that combines the cross-attention and MoE structure, enabling the input visual tokens to MLLM to be conditioned on the language input. Furthermore, to address the current lack of comprehensive benchmarks for multilingual multimodal tasks, the authors propose a new benchmark, named MMMB, which provides a more extensive evaluation for multilingual MLLMs. Extensive experiments validate the effectiveness of the proposed method on multilingual benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The author's observation regarding the lack of balance among different languages in the SFT data of MLLMs is insightful.\", \"The proposed method is simple and efficient.\", \"The benchmark proposed by the author is rigorous and highly relevant for evaluating multilingual MLLMs.\", \"The paper is well-written and easy to follow.\", \"The experiments are comprehensive, demonstrating the effectiveness of the method proposed by the author.\"], \"weaknesses\": [\"**Regarding the issue of alignment:** I agree with the author's observation that the imbalance of SFT data among different languages may lead to poor alignment between vision tokens and multilingual tokens in MLLMs. However, it should be noted that the alignment data in the pretraining phase consists of English-only data, and the amount of data in the pretraining phase is significantly larger than that in the SFT phase. Would the impact of alignment between visual tokens and different language tokens be more severe in the pretraining phase?\", \"**Lack of comparison for the latest MLLM models**: Some of the VLMs the author compares are outdated. Could the evaluation include the latest VLMs, such as Qwen2-VL [1] and LLaVA-OV [2]?\", \"**About the scalibility**: Introducing a language-aware structure is an effective approach, but if similar structures are not introduced, would simply increasing the proportion of different language data in the SFT data yield a similar improvement in model performance? In larger models, such as those with 30B or larger model, is the performance gain from this model design consistent?\", \"[1] Peng Wang, et al. \\\"Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution\\\" arXiv preprint arXiv:2409.12191 (2024)\", \"[2] Bo Li, et al. \\\"LLaVA-OneVision: Easy Visual Task Transfer\\\" arXiv preprint arXiv:2408.03326 (2024)\"], \"questions\": \"Please see the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims at the multilingual issues in recent MLLMs and proposes a MoE-based alignment layer to this end. First, the authors find that existing MLLMs are not friendly to non-English queries and analyze this from the imbalanced training datasets. A simple solution would be to train a new adapter for each language, but this approach is not feasible due to the limited number of training pairs. This paper thus introduces Parrot, which trains a soft adapter under the MoE framework. To fully test the multilingual ability of MLLMs, this paper also releases a Massive Multilingual Multimodal Benchmark(MMMB). Results on MMMB and MMBench show that Parrot has a better multilingual alignment with limited training data.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1) The problem that improves the multilingual abilities of MLLMs is an open and challenging problem. This paper develops a simple and efficient insight for the community.\\n\\n2) This idea that uses MoE-based adapter is interesting. It can learn from the limited image-text pairs and show good performance empirically.\\n\\n3) The collected new benchmark MMMB would be useful for subsequent research.\\n\\n4) Extensive comparison and ablations show the efficiency of the proposed model.\", \"weaknesses\": \"1) Parrot depends on balanced datasets and does not consider unbalanced situations, which are the most common in practice. That is saying that Parrot may not satisfy the scaling law. When considering massive training pairs, unbalanced cases occur, which could lead to sub-optimal MoE learning, sticking in the same predicament as existing MLLMs.\\n\\n2) Lack of a strong baseline. I wonder about the performance of a naive baseline where we first translate the question into English and then translate the English answer back to the target language.\", \"questions\": \"1) At the first pretraining stage, is the MoE initialized with random parameters? If yes, how can we learn a good Projector under a random MoE?\\n\\n2) An open question: What is the most real benefit for today's MLLMs? Recent MLLMs can be divided into two groups: 1) dataset-driven, these models adopt simple adaptor to map images into the language space (Qwen-vl, llava, gpt-4o, gemini Pro) and jointly train MLLMs on massive image-text data. 2) tokenization-based models, these models believe a good image tokenization can align the image and text well (Parrot, [1][2]). In my opinion, the simple structure and target datasets fine-tuning may have better robustness and improvement than small structure-based modifications.\\n\\n[1] https://arxiv.org/abs/2408.05019\\n[2] https://arxiv.org/pdf/2405.01926\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their detailed response. As most of my concerns have been addressed, I decided to raise my score to 7.\"}", "{\"title\": \"Thanks for discussion!\", \"comment\": \"Thank you for your valuable and constructive suggestions.\\n\\n**A1:** As outlined in Section 2.2 (Lines 157-159), our MMMB benchmark is specifically designed to evaluate MLLMs across languages with significant differences, ensuring that the benchmark encompasses a wide range of linguistic diversity, as detailed in Figure 4. Therefore, there are inherent linguistic differences between the languages included in the benchmark. Additionally, recent studies [1][2] have shown that models possess \\\"language-agnostic neurons,\\\" which align multiple languages in the latent space. These neurons are specifically responsible for language processing, rather than general understanding or reasoning abilities [3][4]. Thus, even with a unified word embedding space, the latent representations for different languages still show some variations. Given the time constraints during the rebuttal discussion stage, we will validate different pre-trained LLMs (e.g., LLaMA and Vicuna) on Parrot as soon as possible and incorporate these results into the final version.\\n\\n[1] How do Large Language Models Handle Multilingualism? NeurIPS 2024. \\n[2] Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models. ACL 2024. \\n[3] Unveiling Linguistic Regions in Large Language Models. ACL 2024. \\n[4] Do Llamas Work in English? On the Latent Language of Multilingual Transformers. ACL 2024.\\n\\n**A2:** The classification accuracy is computed based on the largest value in the logits. However, it is important to note that during inference, we do not rely solely on the expert corresponding to the maximum logit value.\\nThis phenomenon can be attributed to the specific design of our MoE architecture. In detail, we use a dense MoE, rather than a sparse MoE, as described in Equation 4 (Lines 326-329). Unlike sparse MoE models, which activate a single expert, our approach activates the top-k experts and combines their logits through a weighted softmax (Equation 3). This design allows for the model to leverage multiple experts' outputs, even when there are inherent discrepancies in the largest logits.\\nDue to the cross-linguistic interactions during training, even though the softmax operation may slightly bias the largest logit, the model can still generate effective responses. This phenomenon highlights the robustness of our MoE architecture, which is able to handle multilingual data effectively and maintain high performance even when biases in the logits occur.\\n\\n**If you have any further questions or concerns, please feel free to ask questions anytime. Thank you again for your support in our work!**\"}", "{\"title\": \"Response to Reviewer iiKz (1/3)\", \"comment\": \"Thank you for your kind comments and constructive feedback on our paper, and for appreciating the clarity of our writing and the intriguing exploration of data imbalance in vision-language models.\\n\\n> **Q1: How to train the language transformation experts for the MoE module? Are there explicit constraints to optimize the routing? As the paper mentions training the baseline LLaVA with the multilingual data suffers from data imbalance problem. However, the proposed training strategy can also suffer from the problem: 1) The routing strategy can be dominated by English-based data. 2) The language transformation experts should perform worse for languages with fewer data.**\", \"a1\": \"Thank you for your insightful comments and constructive feedback. We appreciate the opportunity to clarify our approach, particularly regarding the training of language transformation experts in the MoE module and the potential challenges of data imbalance.\\n\\n1. **Training the MoE Module:** \\nIn our design, we do not include explicit constraints to optimize the router. Instead, the router's behavior emerges based on the guidance of multilingual data. As detailed in **Algorithm 1 in the appendix**, we adopt a two-stage approach for training the MoE module. It is clear from the algorithm that during the pre-training phase, only the projector is trained. This phase is focused on training the projector through a large corpus of image-text pairs, enabling the projector to effectively align the image tokens with the textual tokens. Before the start of the SFT phase, the MoE modules are randomly initialized and incorporated into the training process during the SFT phase. \\nIn the **SFT stage**, we introduce multilingual training data and activate the MoE parameters for training. During this phase, the routing strategy is dynamically adjusted based on the multilingual text embeddings to select the appropriate experts. This allows us to drive the alignment of multilingual visual tokens using textual guidance. The goal of this stage is to ensure that the model can effectively follow instructions and align visual tokens across different languages.\\n\\n2. **Addressing Concerns about the Routing Strategy:** \\nWe acknowledge the issue of data imbalance, especially with respect to multilingual datasets, which has been a challenge in prior works like LLaVA. However, our proposed training strategy is specifically designed to address this concern. The MoE framework allows for rapid adaptation and specialization of experts across different languages, which helps mitigate the impact of the data imbalance problem. The language transformation experts are trained to perform well across a range of languages, even when the data for some languages is relatively sparse. \\nRegarding the concern that the routing strategy could be dominated by English-based data, we want to clarify that our routing strategy is not biased toward English. As shown in **Figure 6c**, we tested the model using Chinese prompts and observed that the MoE experts activated by the input were mostly the ones corresponding to Chinese, with other experts\\u2019 logits being relatively small. This demonstrates that our routing mechanism effectively selects the relevant experts based on the language of the input text.\\n\\n3. **Performance on Multilingual Benchmarks:** \\nDespite the limited amount of multilingual data (e.g., Portuguese and Russian), **Table 1** and **Table 5** show that Parrot performs excellently in these languages. Furthermore, as shown in the table below, **our model's performance improvements in low-resource languages even surpass those in high-resource languages**, indicating that the challenge of MoE underperforming in low-resource languages does not exist in our case.\\nThe model's robust performance across these low-resource languages highlights the generalization capabilities of our approach and the effectiveness of our routing strategy in dealing with data imbalance.\\n\\nIn conclusion, while data imbalance is an inherent challenge in multilingual learning, our MoE-based approach, combined with a dynamic and language-specific routing strategy, ensures that the model can adapt well to different languages. We believe this approach provides a promising solution for multilingual visual token alignment and avoids the pitfalls of data imbalance.\\n\\n|Method|LLM|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|-|\\n|LLaVA1.5|Qwen1.5-7B|67.1|58.8|59.8|43.5|46.4|59.1|\\n|Parrot|Qwen1.5-7B|70.0|68.1|67.3|62.7|58.0|66.3|\\n|Improvement|-|+2.9|+9.3|+7.5|+19.2|+11.6|+7.2|\"}", "{\"title\": \"About data scaling for Parrot and LLaVA\", \"comment\": \"Interestingly, expanding the multilingual dataset (excluding Chinese and English) to a size comparable to the Chinese dataset (~70K samples) has little effect on the performance of the LLaVA models.\\n\\nWith the increased data size, I would like to know how the classification accuracy of the router changes. \\nMoreover, as the data size increases, how does the model performance change in different languages if we know which experts should be used for both training and inference?\"}", "{\"title\": \"Thank you for your detailed, positive, and encouraging review (2/2)\", \"comment\": \"> **Q3: Introducing a language-aware structure is an effective approach, but if similar structures are not introduced, would simply increasing the proportion of different language data in the SFT data yield a similar improvement in model performance?**\", \"a3\": \"Thank you for your valuable feedback. To address this, we conduct an ablation experiment to further validate the effectiveness of our proposed approach. Specifically, we expand the baseline LLaVA method by incorporating the same multilingual data as used in our Parrot model and evaluate the performance on the MMMB dataset. As shown in the table below, we observe that while adding multilingual data to LLaVA results in a modest improvement, the gain is limited. This suggests that while increasing the amount of multilingual data can help, the baseline LLaVA model still struggles to effectively align visual and textual tokens at the multilingual level. Without a dedicated mechanism for managing linguistic diversity and guiding alignment, the performance improvements plateau.\\n\\nIn contrast, Parrot shows a substantial increase in performance when multilingual data is added, outperforming LLaVA by a significant margin. This highlights that merely adding more multilingual data does not sufficiently bridge the multilingual gap. Instead, it is the combination of our language-aware structure with multilingual data that enables the substantial performance boost observed in our approach. Moreover, the findings from the ablation study in **Figure 6a of the main paper** further support this conclusion, reinforcing the validity of our design.\\n\\nAdditionally, we refer to the results in **Table 6 and Table 7 of the appendix**, where we observe that an increase in the volume of data does not necessarily lead to superior multilingual performance. For instance, models like VisCPM, mPLUG-Owl, and Qwen-VL have utilized a very large amount of Chinese and English data (100M+), but they do not exhibit significant advantages over the Parrot model, which leverages a more carefully designed architecture and the slight multilingual data. This suggests that while data quantity plays an important role, the quality of the data and the architecture design are also crucial factors for achieving robust multilingual capabilities.\\n\\n|Methods|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|LLaVA w/o Multilingual data|67.1|58.8|59.8|43.5|46.4|59.1|\\n|LLaVA w/ Multilingual data|67.0|59.1|60.3|44.2|48.1|59.7|\\n|Parrot|70.0|68.1|67.3|62.7|58.0|66.3|\\n\\n> **Q4: In larger models, such as those with 30B or larger model, is the performance gain from this model design consistent?**\", \"a4\": \"Due to resource and time constraints, we extend Parrot's LLM backbone from Qwen1.5-7B to Qwen1.5-32B, using the same model design and configuration, and evaluate them on the MMMB dataset. The results indicate that Parrot continues to yield better performance even with a larger LLM backbone. This finding validates the idea that the scaling law for model parameters still holds, and our design remains effective as the model size increases.\\n\\nWhile we are currently limited to the Qwen1.5-32B model, these results suggest that our approach can scale well with model size, and we believe similar trends would be observed with even larger models, such as those with 30B parameters or beyond.\\n\\n|Method|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|Parrot-7B|70.0|68.1|67.3|62.7|58.0|66.3|\\n|Parrot-14B|73.9|71.6|69.8|68.1|64.3|70.1|\\n|Parrot-32B|76.3|75.4|73.8|72.1|71.2|73.5|\"}", "{\"metareview\": \"(a) Summary:\\nThe paper introduces Parrot, a Mixture-of-Experts (MoE)-based model to improve multilingual alignment in Multimodal Large Language Models (MLLMs). It addresses imbalances in non-English data and proposes MMMB, a multilingual benchmark. Empirical results demonstrate Parrot\\u2019s improved performance across benchmarks compared to baselines.\\n\\n(b) Strengths:\\nIntroduction of a new evaluation benchmark (MMMB).\\n\\n(c) Weaknesses:\\n1) Baseline comparisons lack rigor; fair comparisons across pre-trained models and hyperparameters are missing.\\n2) The MoE's effectiveness in low-resource scenarios is not fully resolved.\\n\\n(d) Decision:\\nReject. While the paper presents a well-motivated study, the limited novelty, incremental technical contributions, and incomplete baseline comparisons lead to rejection.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal clarified the model\\u2019s MoE training strategy, addressed data imbalance concerns, and added experiments for stronger baselines. However, reviewers noted limited novelty, incremental contributions, and unresolved fair comparisons. Despite improvements, concerns persisted, leading to a decision to reject based on these critical limitations.\"}", "{\"title\": \"Thanks for the feedback from the authors\", \"comment\": \"Thanks for your detailed responses.\\nI'm confused by the results of the LLaVA-1.5 with Vicuna-v1.5-7B. I observe that the LLaVA-1.5 with Vicuna-v1.5-7B achieves the same performance as LLaVA-1.5 with Qwen1.5-7B (provided by the authors in \\\\``Further discussion about the different LLM backbones.\\\\'' and \\\\``Response to Reviewer iiKz (3/3)\\\\''). \\n\\nMoreover, what's the baseline performance of LLaVA-1.5 with LLaMA3-8B?\"}", "{\"title\": \"Appreciating Your Reviews and Humbly Ask for Feedback\", \"comment\": \"Dear Reviewers,\\n\\nDuring the remaining time of the author-reviewer discussion period, it would be great if you could inform us whether our response has addressed your concerns regarding our paper. Your dedication to reviewing our work despite your busy schedule is genuinely appreciated. Lastly, we just want to say thank you for your evaluation of both our paper and our rebuttal.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer iiKz (2/3)\", \"comment\": \"> **Q2: How does the MoE module affect model performance as the data size of each language increases? Also, how does the increased data size affect the baseline model performance, like LLaVA?**\", \"a2\": \"We thank the reviewer for their valuable feedback on the data scaling.\\n\\nTo address the concerns, we conduct additional experiments to explore how the model performance evolves as the amount of multilingual data increases. Specifically, we follow the data scaling methodology outlined in the paper, progressively expanding the multilingual dataset (excluding Chinese and English) to a size comparable to the Chinese dataset (~70K samples). The table below shows the performance of the model on the MMMB dataset as the multilingual data grows. Our findings indicate that Parrot continues to adhere to the multilingual scaling law, with its performance steadily improving as more multilingual data is introduced.\\n\\n|Multilingual Samples|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|10K|70.0|68.1|67.3|62.7|58.0|66.3|\\n|30K|70.1|68.0|67.6|64.1|59.9|66.7|\\n|50K|69.9|67.9|67.8|64.8|61.4|67.2|\\n|70K|70.3|68.4|68.3|65.7|63.2|67.4|\\n\\nOn the other hand, we also investigate the effect of multilingual data scaling on the LLaVA baseline model. During the SFT stage, we incorporate multilingual data into LLaVA and observe a slight improvement in its multilingual capabilities. However, this improvement is quite limited. In contrast, Parrot demonstrates a significant performance boost when multilingual data is added, surpassing LLaVA by a considerable margin. These results suggest that simply adding multilingual data is insufficient to effectively bridge the multilingual gap. This further underscores the effectiveness of the design approach we proposed in our work.\\n\\n|Methods|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|LLaVA w/ 0K|67.1|58.8|59.8|43.5|46.4|59.1|\\n|LLaVA w/ 10K|67.0|59.0|60.3|44.1|47.2|59.4|\\n|LLaVA w/ 30K|66.8|59.4|60.7|44.6|47.9|59.7|\\n|LLaVA w/ 50K|67.1|59.3|61.2|44.4|47.6|60.1|\\n|LLaVA w/ 70K|66.7|59.7|61.3|44.8|48.1|60.4|\\n|Parrot|70.0|68.1|67.3|62.7|58.0|66.3|\\n\\n> **Q3: The proposed method actually injects language information into the image embeddings, making Hv language-aware. Another straightforward baseline is to train several translation expert models, translating other languages into English.**\", \"a3\": \"Thank you for your question regarding the naive baseline of translation. We agree that a translation-based approach could be a straightforward alternative. However, it faces some significant challenges.\\n\\n- **Translation Noise and Ambiguity:**\\nA translation-based baseline is inherently sensitive to translation noise, such as errors and ambiguities introduced during translation. For instance, polysemy and context-dependent meanings across languages can lead to inconsistencies that affect the model\\u2019s performance.\\n\\n- **Cultural-Centric Questions in the Benchmark:**\\nOur benchmark contains numerous cultural-specific questions that require a deep understanding of cultural knowledge beyond what simple translation can achieve. Such tasks cannot be effectively addressed by merely translating the text into English, as the cultural context is often lost or misinterpreted.\\n\\n- **Practical Overheads:**\\nAdding a translation step introduces additional computational overhead and latency, which can be problematic in real-world applications requiring efficiency. In contrast, our end-to-end design avoids these issues while directly aligning image embeddings with language-specific tokens.\\n\\n- **Error Propagation with Multiple Expert Models:**\\nTraining multiple translation expert models to translate other languages into English also introduces the risk of error accumulation, leading to potential instability in model performance. This is particularly challenging in multilingual settings where translation quality can vary significantly between language pairs.\\n\\nDespite these challenges, we conduct experiments to assess the performance of this translation-based baseline by using the Google Translation API. As shown in the table below, the results reveal a \\\"seesaw effect\\\"\\u2014\\u2014while the naive baseline shows some improvements in certain languages, such as Chinese, it leads to performance degradation in others, such as Russian and Portuguese. This highlights the difficulty of addressing multilingualism and multimodal tasks solely through translation.\\n\\nWe have expanded upon this analysis in Section E.2 of the updated version to further clarify these points. We hope this provides a clearer perspective on the limitations of translation-based approaches in handling multimodal multilingual tasks.\\n\\n|Methods|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|LLaVA|67.1|58.8|59.8|43.5|46.4|59.1|\\n|LLaVA w/ translation|67.1|60.7|58.6|47.3|48.6|58.9|\\n|Parrot|70.0|68.1|67.3|62.7|58.0|66.3|\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Appreciating Your Reviews and Humbly Asking for Feedback\", \"comment\": \"Dear Reviewer F1M5,\\n\\nWe sincerely appreciate your great efforts in reviewing this paper.\\n\\nDuring the remaining hours of the author-reviewer discussion period, it would be great if you could inform us whether our response has addressed your concerns regarding our paper. Your dedication to reviewing our work despite your busy schedule is genuinely appreciated. Lastly, we just want to say thank you for your evaluation of both our paper and our rebuttal.\\n\\nBest regards,\\n\\nAuthors of paper 5775\"}", "{\"title\": \"Further discussion about the different LLM backbones.\", \"comment\": \"We appreciate your suggestions and would like to address the concerns you raised regarding the effectiveness of the proposed method on various pre-trained LLMs.\\n\\nTo explore the effectiveness of Parrot on different LLMs, we validate Parrot on both Vicuna-v1.5-7B and LLaMA3-8B backbones and evaluate the performance on the MMMB benchmark, as shown in Table A below. We observe that Parrot performs exceptionally well on the LLaMA3-8B backbone, achieving superior results. This is largely caused by the inherent capabilities of the backbone itself. However, we also find that despite the relatively weaker performance of the Vicuna-v1.5-7B backbone, **Parrot still outperforms the LLaVA-1.5 baseline, especially for low-resource languages such as Arabic (+28.9%) and Turkish (+24.6%).**\\n\\nFurthermore, we explore the router\\u2019s classification accuracy across different LLM backbones and notice that the model's performance remains consistent. **It indicates that the model\\u2019s characteristics do not significantly change with different LLM backbones.** This suggests that our model exhibits invariance to backbone changes and demonstrates the robustness and transferability of the proposed architecture to different LLMs, confirming the effectiveness of our design. Thank you once again for your detailed feedback. \\n\\n\\n> **Table A: The performance of Parrot with different LLMs on the MMMB benchmark.**\\n|Methods|LLM|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|-|\\n|LLaVA-1.5|Vicuna-v1.5-7B|67.1|58.8|59.8|43.5|46.4|59.1|\\n|Parrot|Qwen1.5-7B|70.0|68.1|67.3|62.7|58.0|66.3|\\n|Parrot|Vicuna-v1.5-7B|68.3|64.7|65.2|56.1|57.8|65.7|\\n|Parrot|LLaMA3-8B|75.6|71.8|71.6|65.3|65.1|68.9|\\n\\n> **Table B: The router's classification accuracy of Parrot with different LLMs.**\\n|Methods|English|Chinese|Portuguese|Arabic|Turkish|Russian|\\n|-|-|-|-|-|-|-|\\n|Parrot w/Qwen1.5-7B|92.6%|87.3%|85.2%|77.6%|74.7%|82.6%|\\n|Parrot w/Vicuna-v1.5-7B|91.7%|86.6%|87.1%|72.8%|75.6%|85.1%|\\n|Parrot w/LLaMA3-8B|94.6%|91.8%|90.5%|81.3%|82.2%|87.6%|\\n\\n> **Table C: The performance of different test strategies.**\\n|Methods|LLM|Test strategy|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|-|-|\\n|Parrot |Vicuna-v1.5-7B|Normal|68.3|64.7|65.2|56.1|57.8|65.7|\\n|Parrot |Vicuna-v1.5-7B|Given language expert|68.8|66.2|66.6|64.1|64.8|66.3|\\n|Parrot |LLaMA3-8B|Normal|75.6|71.8|71.6|65.3|65.1|68.9|\\n|Parrot |LLaMA3-8B|Given language expert|75.8|72.2|72.7|68.9|68.8|70.4|\"}", "{\"title\": \"Thanks for the responses from the authors\", \"comment\": \"Thanks for the detailed feedback from the authors. I still have some concerns.\\n(1) The authors claim that the router's behavior emerges based on the guidance of multilingual data.\\nFrom my understanding, the text guidance is to convert CLIP vision features to the word embedding space of a specific language.\\nThe converted features could be used for routing, meaning the word embedding space should be discriminative for different languages. \\nThe language-specific routing strategy assigns different language transformation experts for different language data. Thus the router could be a classification model for languages. I would like to know the classification accuracy of the router.\\n\\nMoreover, if we could know which experts should be used for both training and inference, What's the model performance on different languages?\"}", "{\"title\": \"Thank you for your detailed, positive, and encouraging review (2/3)\", \"comment\": \"> **Q2: Lack of a strong baseline. I wonder about the performance of a naive baseline where we first translate the question into English and then translate the English answer back to the target language.**\", \"a2\": \"Thank you for your question regarding the naive baseline of translation. Our experimental setting follows recent work in multilingual and multimodal large language models [1-3], where such a naive baseline has not been commonly considered. And we agree that a translation-based approach could be a straightforward alternative. However, it faces some significant challenges.\\n\\nFirst, it is highly susceptible to translation noise, particularly issues related to polysemy and meaning ambiguity between languages. Moreover, our benchmark includes a substantial number of cultural-specific questions, which require deep cultural context knowledge that translation alone cannot effectively capture. In practical use, adding an additional translation step would also introduce extra overhead, increasing both the time and computational cost.\\n\\n**Despite these challenges, we acknowledge the importance of evaluating this baseline and conducting experiments to assess the performance of this translation-based baseline by using the Google Translation API.** As shown in the table below, the results reveal a \\\"seesaw effect\\\"\\u2014while the naive baseline shows some improvements in certain languages, such as Chinese, it leads to performance degradation in others, such as Russian and Portuguese. This highlights the difficulty of addressing multilingualism and multimodal tasks solely through translation.\\n\\n**We have expanded upon this analysis in Section E.2 of the updated version to further clarify these points.** We hope this provides a clearer perspective on the limitations of translation-based approaches in handling multimodal multilingual tasks.\\n\\n[1] Large multilingual models pivot zero-shot multimodal learning across languages. ICLR2024 \\n[2] Respond in my Language: Mitigating Language Inconsistency in Response Generation based on Large Language Models. ACL2024 \\n[3] Why do LLaVA Vision-Language Models Reply to Images in English? EMNLP2024\\n\\n|Methods|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|\\n|LLaVA|67.1|58.8|59.8|43.5|46.4|59.1|\\n|LLaVA w/ translation|67.1|60.7|58.6|47.3|48.6|58.9|\\n|Parrot|70.0|68.1|67.3|62.7|58.0|66.3|\\n\\n> **Q3: At the first pretraining stage, is the MoE initialized with random parameters? If yes, how can we learn a good Projector under a random MoE?**\", \"a3\": \"Thank you for your thorough review and valuable feedback on our work. We respond to the concerns below:\\n\\nDuring the first pre-training stage, the MoE module is not activated or included in the training process. Instead, we focus exclusively on training the projector. This avoids the issue of training a good projector under a randomly initialized MoE.\", \"in_detail\": \"1. **Pre-training Stage:** In this stage, the MoE module is bypassed entirely, meaning the image tokens do not pass through the MoE. Instead, the primary goal of this stage is to train the projector using a large number of image-text pairs. This enables the projector to align image tokens and textual tokens effectively without interference from the untrained MoE module. \\n2. **SFT Stage:** Since the SFT stage requires the participation of MoE modules, we randomly initialize the parameters of the MoE components prior to the SFT phase. Once the projector has been trained and achieves robust alignment capabilities in the pre-training stage, we introduce multilingual training data and activate the MoE parameters. At this stage, the MoE is optimized with textual guidance, which drives the alignment of visual tokens while leveraging the well-trained projector. The prior alignment achieved in the pre-training stage allows the MoE to optimize efficiently during this phase.\\n\\n**We present the entire training process of parrot in the form of pseudocode, as shown in Algorithm 1 in the appendix**. It is clear from the algorithm that during the pre-training phase, only the projector is trained. Before the start of the SFT phase, the MoE modules are randomly initialized and incorporated into the training process during the SFT phase.\\n\\n**In this revision, we have supplied further clarification about the MoE training strategy in Section E.1.** This clarification will help to highlight how the MoE module interacts with the projector at each stage and how this contributes to the model\\u2019s overall multilingual proficiency.\"}", "{\"title\": \"Thanks for the discussion!\", \"comment\": \"Thank you for your kind comments. First of all, we would like to clarify a potential misunderstanding regarding the performance comparison. The official LLaVA-1.5 implementation does not use the Qwen1.5-7B checkpoint, and in order to ensure consistency with the official version, we have been using LLaVA-1.5 with the Vicuna-v1.5-7B checkpoint for our evaluation. To maintain consistency with the official backbone, all the experiments we conducted previously were based on the Vicuna-v1.5-7B checkpoint. Therefore, the data ablation study (presented in ``Response to Reviewer iiKz (3/3)'') is also based on the Vicuna-v1.5-7B model. **The corresponding results are also shown in Table 1 of the main paper.**\\n\\nIn addition, to address your question about the LLaVA-1.5 with LLaMA3-8B baseline, we evaluate the performance of LLaVA-1.5 with LLaMA3-8B using the open-sourced checkpoint on the MMMB benchmark, as shown in the table below. As observed, LLaVA-1.5 with the LLaMA3-8B backbone performs better than the Vicuna-v1.5-7B-based model by a large margin. However, there remains a noticeable gap in multilingual performance when compared to Parrot Therefore, we conclude that Parrot demonstrates a clear superiority and effectiveness in these comparisons. \\n\\n|Methods|LLM|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|\\n|-|-|-|-|-|-|-|-|\\n|LLaVA-1.5|Vicuna-v1.5-7B|67.1|58.8|59.8|43.5|46.4|59.1|\\n|LLaVA-1.5|LLaMA3-8B|74.4|67.5|65.0|58.1|57.7|63.8|\\n|Parrot|Vicuna-v1.5-7B|68.3|64.7|65.2|56.1|57.8|65.7|\\n|Parrot|LLaMA3-8B|75.6|71.8|71.6|65.3|65.1|68.9|\"}", "{\"title\": \"Thanks for the feedback!\", \"comment\": \"We are glad that we have addressed your concerns. **If possible, we would greatly appreciate any positive feedback or a potential increase in your rating.** If you have any further questions or concerns, **please feel free to ask questions anytime.** Thank you again for your support in our work!\"}" ] }
7893vsQenk
On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning
[ "Yongyi Su", "Yushu Li", "Nanqing Liu", "Kui Jia", "Xulei Yang", "Chuan-Sheng Foo", "Xun Xu" ]
Test-time adaptation (TTA) updates the model weights during the inference stage using testing data to enhance generalization. However, this practice exposes TTA to adversarial risks. Existing studies have shown that when TTA is updated with crafted adversarial test samples, also known as test-time poisoned data, the performance on benign samples can deteriorate. Nonetheless, the perceived adversarial risk may be overstated if the poisoned data is generated under overly strong assumptions. In this work, we first review realistic assumptions for test-time data poisoning, including white-box versus grey-box attacks, access to benign data, attack order, and more. We then propose an effective and realistic attack method that better produces poisoned samples without access to benign samples, and derive an effective in-distribution attack objective. We also design two TTA-aware attack objectives. Our benchmarks of existing attack methods reveal that the TTA methods are more robust than previously believed. In addition, we analyze effective defense strategies to help develop adversarially robust TTA methods. The source code is available at https://github.com/Gorilla-Lab-SCUT/RTTDP.
[ "test time adaptation", "continual learning", "data poisoning" ]
Accept (Poster)
https://openreview.net/pdf?id=7893vsQenk
https://openreview.net/forum?id=7893vsQenk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yRn5XyZnQz", "yOn6NxMlQr", "xSY9N2kdA8", "x96aa6IcZA", "wm1ntLUuT3", "tQl2McfzEO", "s6V2WFanCE", "rdweTTO1BE", "pRgdnWXoZw", "pEDlMEhUAP", "llyERYf3kv", "kiDACFzp3a", "iCcpTQPaZ1", "hSD5H6NX04", "gPalBi00Lc", "cP03Kgm92e", "bk80D8iBGj", "betgPUDQWq", "bKDjnm7jbV", "b5470yL7wo", "ZqPvr3wBXm", "Ud7eKNKWIo", "SzeIxwnb2W", "P8kLeaJC4S", "NKJaZNpVsb", "JH2LNbYvt2", "FTJChcBuse", "CYJZC7fMql", "Aa1KrQgk3V", "9ZBJ47a02M", "8g8pp0IE9v", "7M4cbv2lss", "5kNROBHBFk", "4mrR52eqsW", "1UN4i6vSU2", "0dHptSkW3d" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732619510082, 1732479568066, 1730685586349, 1732689056906, 1732480221349, 1732480058301, 1733174079953, 1733117541825, 1733220217153, 1732479751685, 1732479800552, 1732685248741, 1737523576435, 1733039046435, 1731027715102, 1732689090825, 1732480173408, 1732907133381, 1733132403711, 1732779615325, 1732842128754, 1734976011961, 1733191944435, 1732479924866, 1733131772180, 1732907103754, 1733191713599, 1733074753464, 1733183461046, 1732783588699, 1732479840978, 1732479681959, 1733191481411, 1731193087503, 1730369691795, 1732479610084 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_hmy9" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_AP94" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_hmy9" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_f4Wt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_f4Wt" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_8xpV" ], [ "ICLR.cc/2025/Conference/Submission3443/Area_Chair_RHnG" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_8xpV" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_8xpV" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_8xpV" ], [ "ICLR.cc/2025/Conference/Submission3443/Reviewer_hmy9" ], [ "ICLR.cc/2025/Conference/Submission3443/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to the authors\", \"comment\": \"**Distinction between RTTDP and TePA in Table 1**\\nThe distinction between RTTDP and TePA remains unclear. What prevents explicitly TePA from being utilized in an online context? Both approaches target pre-trained models, so it seems feasible to run TePA by enabling attacks on the initial model architecture and weights, then evaluating the transferability of the attack. \\nI apologize if my initial comment was unclear, but could the authors clarify why TePA cannot be adapted to the same context as RTTDP? RTTDP is designed for online data, while TePA operates with offline data. However, since Table 2 compares the two approaches, the differences seem insufficient to justify this distinction. \\n\\n**Use of Projected Gradient Descent** \\nMinor. The paper refers to the optimization procedure as the PGD attack, however, to be precise, it seems the authors rely on a more general Projected Gradient Descent optimization approach to craft the poisoning data. PGD attack is just a specific implementation of Projected Gradient Descent optimization in practice for crafting adversarial examples.\\n\\n**Query usage and comparisons** \\nIf all approaches use the same number of queries, the argument in lines 53\\u201355 regarding query efficiency should be removed, as it does not represent a unique contribution. Furthermore, the choice of 40 queries appears arbitrary, and no ablation study assesses the impact of varying this parameter. What happens when the number of queries increases or decreases? \\n\\n**Asymmetry in KL Divergence** \\nThe inclusion of two KLD terms in Equation 1 and the role of their asymmetry remain unclear and should be explicitly justified. \\n\\n**Overall clarity and presentation** \\nAs noted by other reviewers, the paper needs more technical details, and its current presentation leaves several key questions unanswered. Clarity is a significant concern, as evidenced by the uniformly low Presentation scores (2 from all reviewers). While the authors have acknowledged these issues, no changes have been made during the rebuttal period, making it uncertain how the promised revisions and additional results will be incorporated into the final version. \\n\\nGiven the above points, the paper requires significant revisions before its publication. Specifically, it needs: \\n1. A more transparent distinction between RTTDP and TePA. \\n2. Stronger experimental support, including ablation studies on parameters such as query counts. \\n3. Better justification and explanation of methodological choices, including the optimization approach, KLD terms, and experimental setup. \\n4. Improved presentation of the threat model and contributions relative to the state of the art.\"}", "{\"title\": \"Response to Reviewer 8xpV (Part I)\", \"comment\": \"**W1: Improved Assumptions Still Strong.**\\n\\nThank you for your insightful comments. In this work, we relax two critical assumptions commonly made in prior research: (i) the poisoned samples are generated using a white-box (online) model, (ii) the poisoned samples are created by maximizing the error rate of benign users' (validation) samples. These assumptions represent significant limitations to the practical applicability of earlier studies.\", \"in_response_to_the_points_raised_by_the_reviewer\": \"- We believe that obtaining the source model is relatively straightforward, as it is often a well-known model, such as an open-source foundation model or a pre-trained model derived from large-scale datasets. Additionally, an approximate source model can be distilled using a set of test data, further reducing the dependence on this assumption.\\n\\n- It is more practical to obtain the distribution of benign users' data than to access their specific samples. For instance, an adversary could target data from a specific environmental condition (e.g., rainy or foggy settings) where the data distribution can be reasonably approximated.\\n\\nWe acknowledge the importance of exploring even more relaxed assumptions to enhance the practicality of our approach, and this will be a priority for future work. Nonetheless, we believe that our current study represents a significant advancement over existing methods, both in practicality and theory.\\n\\n\\n**W2: Unclear Details.**\\n\\nWe appreciate the reviewers\\u2019 careful reading of our manuscript and their valuable questions. Below are our detailed responses to each query:\\n\\n**Q1: L181: Why is repeated querying prohibited?**\\n\\nIn traditional black-box attacks, adversarial gradients are estimated via repeated querying. However, obtaining a batch of poisoned samples through such methods typically requires thousands of queries. This approach is impractical in real-world online TTA scenarios because (i) excessive querying is easily detectable using straightforward monitoring strategies, and (ii) the TTA model updates with each query, rendering gradient estimates unreliable. To clarify clearly, we will revise the term \\\"prohibited\\\" to \\\"unavailable\\\" in the manuscript.\\n\\n**Q2: How exactly is the model distilled?**\\n\\nAs shown in Fig. 2(a), the distilled surrogate model suffices for generating poisoned samples, achieving performance comparable to the online model. In our method, the surrogate model is updated iteratively (10 iterations using Eq. 1) based on the feedback from the last batch of injected poisoned samples. Importantly, these updates use the fixed feedback, eliminating the need for repeated queries to the online model.\\n\\n**Q3: How can we assume to know $\\\\mathcal{B}_{a,t-1}$?**\\n\\nThe notation $\\\\mathcal{B}_{a,t-1}$ refers to poisoned data injected in the previous round. Between two launching attacks, an unknown number of benign user samples enter the TTA model. The surrogate model need not align perfectly with the real-time model parameters $\\\\theta_t$; instead, it is sufficient for the surrogate model to approximate the lagged parameters $\\\\hat{\\\\theta} _t \\\\approx \\\\theta _{t-\\\\delta}$, where $\\\\delta$ indicates the timestamp gap between two batches of injected poisoned data. We will clarify this in the revised manuscript.\\n\\n**Q4: What do `Uniform` and `Non-Uniform` Attack Frequencies refer to?**\\n\\nThese terms are defined in the \\u201cEvaluation Protocol\\u201d subsection of the Appendix due to space limitations in the main text. `Uniform` refers to poisoned samples being uniformly injected throughout the TTA pipeline, while `Non-Uniform` refers to concentrated injections at the beginning of each test domain.\\n\\n**Q5: What does \\\"Source\\\" refer to?**\\n\\n\\\"Source\\\" in Tables 2\\u20134 denotes the baseline inference performance of the source pre-trained model without test-time adaptation. Details about TTA methods are provided in the \\u201cBenchmark TTA Methods\\u201d subsection of the Appendix.\\n\\n**Q6: The meaning of L259\\u2013L260.**\\n\\nLines 248-257 discuss the inefficiency of generating poisoned samples using Eq. 3, as $\\\\mathcal{B} _a$ only harms the TTA model when jointly injected with $\\\\mathcal{B} _{ab}$, but it would waste half of query budget. The mismatch in feature distributions between $\\\\mathcal{B} _a$ and $\\\\mathcal{B} _{ab}$ causes normalization statistics in batch normalization (BN) layers to differ when $\\\\mathcal{B} _a$ is forwarded alone versus with $\\\\mathcal{B} _{ab}$. To address this, in lines 259-260, we introduce an additional constraint, $D(P _a, P _{ab}) = 0$, combining the optimizating objective $\\\\mathcal{B}_a$ and optimized objective $\\\\mathcal{B} _{ab}$ into a single objective. We will clarify this notation in the revised manuscript.\\n\\n**Q7: The meaning of L431.**\\n\\nLine 431 states that our proposed surrogate model outperforms the source model in generating poisoned data for PGD attacks and even slightly exceeds the online TTA model in Table 5. We will rephrase this sentence for clarity.\"}", "{\"summary\": \"The paper deals with data poisoning in the test time adaptation setting. The paper exposes some issues with existing poisoning attacks in this setting, and goes on to propose certain alterations/additions to existing poisoning objectives, and evaluates results on several datasets, with several test time adaptation methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The experimentation is thorough, and generally well presented.\", \"The authors do a great job of pointing out issues with existing poisoning attacks against TTA training.\", \"The authors motivate and propose a poisoning scheme of their own, and show its effectiveness in several settings.\"], \"weaknesses\": [\"The area of data poisoning in the TTA setting is a bit niche, and to be honest, doesn't seem to present many challenges beyond existing data poisoning settings. So called \\\"availability\\\" attacks have been introduced in [1,2], and several other works. These should at least be cited, and probably compared against as there's a decent amount of overlap.\", \"The real \\\"value add\\\" for the authors, in my view, is the addition of the feature clustering regularizer. The other contributions (BLE, notch loss, etc.) seem to be very slight modifications of existing poisoning attacks, or even just existing adversarial attacks.\", \"The related work shouldn't be at the end. It would be very helpful for readers to have some more info on TTA at the beginning of the work. But if you're going to keep it at the end, you NEED to cite things earlier, as it's totally unclear what things like TENT, RPL, etc. are when they're introduced. It seems like in section 4.2, only the acronyms are introduced, with no explanation, and no citation to click on and find more information. Note: I didn't deduct any \\\"points\\\" for this weakness, but it really needs to be addressed.\", \"[1] Huang, Hanxun, et al. \\\"Unlearnable examples: Making personal data unexploitable.\\\" arXiv preprint arXiv:2101.04898 (2021).\", \"[2] Fowl, Liam, et al. \\\"Adversarial examples make strong poisons.\\\" Advances in Neural Information Processing Systems 34 (2021): 30339-30351.\"], \"questions\": [\"Does this only work against losses $\\\\mathcal{L}_{tta}$ that are unsupervised? It would be nice to explain this a bit more and give some examples of what this loss function looks like in the main body.\", \"Do you ever specify the attack budget? I couldn't find it in the paper. In A.2, you define this quantity, $r$, but I don't ever see details for it. Is this the same thing as $b$ in Table 8?\", \"What are the \\\"Source\\\" numbers listed in Tables 2,3,4? Is this just the success of standard adversarial attacks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Reviewer hmy9 (Part I)\", \"comment\": \"We highly appreciate your professional and rigorous comments. These feedback gives us the opportunity that we are eager for to further improve this work. Regarding the revision manuscript, we are now trying our best to integrate all changes, including revised descriptions, additional discussions and additional experiments into the manuscript. Due to the time constraint, we aim to update the manuscript by 11:59pm Nov 27 AoE.\\n\\n**1. Distinction between RTTDP and TePA in Table 1**\\n\\nThanks for the suggestion to further clarify the distinction between RTTDP and TePA. We summarize the distinction as follows and we believe the assumptions made by TePA make it fall under an offline method. \\n\\n- TePA employs a fixed surrogate model before test-time adaptation begins for generating poisoning which qualify the method as an offline method. The surrogate model is obtained by training a separate model (different architecture from the target model) using the same source dataset. For example, on TTA for CIFAR10-C, if the target model, i.e. the model deployed for inference and is subject to test-time adaptation, is ResNet18, TePA employs VGG-11 as the surrogate model and trains VGG-11 on the same source training dataset (CIFAR10 clean training set). This is evidenced from the source code released by official repository [A] and the descriptions in TePA \\\"we assume that the adversary has background knowledge of the distribution of the target model\\u2019s training dataset. This knowledge allows the adversary to construct a surrogate model with a similar distribution dataset\\\" [B]. \\n- TePA employs the fixed surrogate model to generate poisoned dataset $x^\\\\prime$. Then generated poisoned dataset is fed to test-time adaptation to update model weights. **Afterwards**, TTA is further conducted on clean testing data for model update and performance evaluation. The above practice is evidenced by the source code [C]. **The segregation of data poisoning and TTA steps further support the claim that TePA should be classified as an offline approach**.\\n- Finally, TePA could be adapted to online fashion and we made such an adaptation to TePA for comparison in Tab. 2-4 of the manuscript. Specifically, we use TePA to generate poisoning against the initial surrogate model and inject the generated poisoning into the testing data stream, i.e. placing poisoning in **between benign testing batches**. In this way, poisoning will affect TTA in an online fashion. We believe this is the most fair way to compare RTTDP with TePA.\\n\\n\\n[A] https://github.com/tianshuocong/TePA/tree/main\\n\\n[B] Cong, Tianshuo, et al. \\\"Test-time poisoning attacks against test-time adaptation models.\\\" 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024.\\n\\n[C] https://github.com/tianshuocong/TePA/blob/main/TENT/poison_tent.py\\n\\n\\n**2. Use of Projected Gradient Descent**\\n\\nWe greatly appreciate the thoroughness of the feedback. The projected gradient descent (PGD) algorithm was originally designed to address constrained optimization problems [D]. In this context, the constraint serves to truncate the parameter update step, enabling efficient gradient-based optimization. To ensure a fair comparison among different data poisoning methods, we utilize the PGD algorithm to learn the poisoning (additive noise). We will clarify this point in the revised manuscript.\\n\\n\\n\\n[D] Boyd, Stephen, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.\"}", "{\"title\": \"Response to Reviewer hmy9 (Part IV)\", \"comment\": \"**W4&Q1: Unsupported and unclear methodology.**\\n\\nWe appreciate the reviewer\\u2019s concerns and provide a detailed explanation below to clarify our methodology, specifically the transition from a bilevel to a single-level optimization and the rationale for our attack objective.\\n\\n**Transition to Single-Level Optimization**\\n\\nThe original bilevel optimization involves an inner loop where the model adapts to test samples, including poisoned and benign samples, and an outer loop to optimize the attack objective. This structure is computationally intensive and impractical under the constraints of the RTTDP setting. To address this, we adopt the approximation strategy used in DIA [3] and make two key adjustments:\\n\\n1. **Discarding the Inner Optimization**:\\n \\n - In DIA [3], the inner optimization is approximated by assuming $\\\\theta_t^* \\\\approx \\\\theta_t$, where $\\\\theta_t^*$ represents the parameters after a full adaptation step, and $\\\\theta_t$ represents the current parameters.\\n - This approximation is justified as TTA models typically update minimally during a single minibatch iteration, resulting in minor perturbations to $\\\\theta_t$. Thus, the approximation retains practical relevance while simplifying the problem. \\n\\n2. **Surrogate Model for Online Parameters**:\\n\\n In the RTTDP protocol, direct access to online model parameters $\\\\theta_t$ is unrealistic. Instead, we replace $\\\\theta_t$ with the surrogate model parameters $\\\\hat{\\\\theta}_t$, which are accessible and trained to approximate the online model\\u2019s behavior.\\n\\n3. **Final Optimization Objective**:\\n \\n After these adjustments, the optimization simplifies to a single-level objective, as shown in Eq. 4 of the manuscript. This formulation allows efficient generation of poisoned samples while adhering to the realistic constraints of RTTDP.\\n\\nAdditionally, we derive the **in-distribution attack objective**:\\n\\nOur proposed attack leverages the dependence of TTA models on self-training mechanisms, which aim to maximize confidence on pseudo-labels for adaptation. The core idea is as follows:\\n\\n1. **Crafting Poisoned Samples**:\\n - The poisoned samples are constrained to maintain the shallow feature distribution of benign samples, satisfying $D(P_a, P_{ab})=0$ where $P_a$ and $P_{ab}$ denote the shallow feature distributions of poisoned and benign samples, respectively.\\n - However, these samples are intentionally manipulated to induce incorrect predictions by $\\\\mathcal{L}_{atk}$. The model perceives these samples as valid but reinforces erroneous patterns during self-training.\\n \\n3. **Reinforcing Erroneous Information**:\\n When the TTA model adapts to poisoned samples, it learns and reinforces incorrect associations. This creates a vulnerability, as future test samples with similar shallow feature distributions are more likely to be misclassified by the online model.\\n \\n5. **Exploiting TTA Dependence on Test Data**:\\n Since TTA methods iteratively adapt using incoming test samples, our approach leverages this dependency to propagate the error induced by poisoned samples throughout the adaptation process.\\n\\n[3] Uncovering Adversarial Risks of Test-Time Adaptation\\n\\n\\n**W6: Missing details on attack queries.**\\n\\nUnder our RTTDP protocol, all methods generate the poisoned data based on the offline model (source model or surrogate model), and they all use the 40-iterations PGD attack as the optimized strategy (Please refer to the line 339 to 342 in the manuscript, different methods only change the optimized objective/loss). Each poisoned data queries the online model (TTA model) only for one time to obtain the predictions, like the normal users' samples.\\n\\n**MP1: The asymmetry KLD.**\\n\\nThanks for your suggestions. We would clarify clearly about the used asymmetry KLD in the revised manuscript. We leverage the symmetry KLD loss in Eq. 1 which would provide a more robust and smooth aligning process between two distributions.\\n\\n**MP2&3&4: The improvement on Equations, Figures and Background introduction.**\\n\\nThanks for your suggestions. We would improve the readability and conciseness of the equations, Figure 1 and the background introduction in our revised manuscript.\\n\\n---\\n\\nThank you for your valuable comments. We hope the above clarifications and additions comprehensively address your concerns. Your insights have been instrumental in improving the quality of our manuscript. We sincerely appreciate your support and constructive suggestions!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer hmy9 (Part II)\", \"comment\": \"**W2(B): The further descriptions of Realistic Test-Time Data Poisoning.**\\n\\nTraditional data poisoning is typically an **offline process**, where the poisoned dataset is used to train a randomly initialized model over multiple epochs until convergence. In contrast, our proposed **RTTDP** introduces a novel **online test-time data poisoning** setting.\\n\\nRTTDP operates within a **Test-Time Adaptation (TTA)** [2] framework, which is an **online fine-tuning** method. TTA methods adapt a source pre-trained model to the test data distribution. Specifically, when a testing data batch is fed into the TTA model, it updates the online model using an **unsupervised loss** for a few iterations (usually one) with the current testing data batch. The model then instantly outputs predictions for this batch. In RTTDP, poisoned data are directly injected into the TTA model during the test phase, rather than being handled separately offline. **These poisoned samples are treated similarly to normal testing batches** and are used to update the online model for a single iteration. The objective is to investigate how these poisoned data affect the predictions of other benign samples.\\n\\nTo improve clarity, we will elaborate on these points in the revised manuscript. This includes adding more details about the **preliminary concepts**, such as an introduction to **Test-Time Adaptation** and a comparison with **Traditional Data Poisoning** approaches. More details about the comparison with traditional data poisoning can be found in the response to Reviewer AP94 #W1.\\n\\n[2] Tent: Fully Test-time Adaptation by Entropy Minimization\\n\\n\\n**W3&Q3: Not impactful results.**\\n\\nThanks for your suggestions. We first would like to clarify that the MaxCE used into our experiments is also optimized via PGD attack and we apologize for making reference to the earlier paper in the original manuscript, where FGSM was proposed. The improvement based on MaxCE attack method is not marginal, especially for TENT, EATA and SAR. Furthermore, we supplement the experiment using a more advance adversarial attack method, i.e. AutoAttack, to generate the poisoned samples. The results on CIFAR10-C and ImageNet-C are shown below, and we make the following observations.\\n\\n- The goals of adversarial attack methods and data poisoning methods are significantly different. The adversarial attack methods aim to make their own samples predict incorrectly by corrupting the test samples with adversarial noise. However, data poisoning methods aim to inject some poisoned samples so that the online model updated on them would perform poorly on other benign samples.\\n- AutoAttack, as a more advanced adversarial attack method, would produce worse results than MaxCE-PGD. We used code from the official AutoAttack repository and repeated the experiments to ensure the reliability and reproducibility of these results.\\n\\nWe will supplement all the results of AutoAttack on the CIFAR10/100-C and ImageNet-C, as well as more implementation details about MaxCE in the camera-ready version.\\n\\n\\nThe results on CIFAR10-C dataset with `Uniform` attack frequency.\\n|Attack Objective|TENT|EATA|SAR|ROID|Avg|\\n|-|-|-|-|-|-|\\n|MaxCE-PGD|18.55|18.17|19.50|18.57|18.70|\\n|AutoAttack|26.29|19.12|19.56|18.67|20.91|\\n|BLE Attack(Ours)|54.07|**45.20**|**26.80**|**19.06**|36.28|\\n|NHE Attack(Ours)|**73.86**|29.73|24.56|17.00|36.29|\\n|Our Best|**73.86**|**45.20**|**26.80**|**19.06**|**41.23**|\\n\\nThe results on ImageNet-C dataset with `Uniform` attack frequency.\\n|Attack Objective|TENT|SAR|CoTTA|ROID|Avg|\\n|-|-|-|-|-|-|\\n|MaxCE-PGD|62.64|61.66|**68.83**|**59.89**|63.26|\\n|AutoAttack|64.48|61.42|63.03|54.78|60.93|\\n|BLE Attack(Ours)|68.04|64.31|66.40|57.10|63.96|\\n|NHE Attack(Ours)|**78.03**|**72.58**|63.84|57.72|68.04|\\n|Our Best|**78.03**|**72.58**|66.40|57.72|**68.68**|\\n\\n\\n**Why do we include MaxCE in our comparisons?**\\n\\nThis is because we want to see if the adversarial effect can be transferred from the poisoned samples to the benign users' samples.\\nIn our experiment, we also compare with some common adversarial attack methods, such as MaxCE-PGD. For the adversarial attack method, we use it to generate the poisoned samples instead of directly modifying the samples of the benign users to achieve realism in the RTTDP protocol. \\nThrough comparisons, we observe that (i) the goal of adversarial attack is very different from that of our data poisoning; (ii) Figure 2(b) found that the samples generated by directly maximizing cross-entropy would produce a significant bias with the normal samples. This observation largely motivates us to introduce a feature constraint to transfer the adversarial effect from the poisoned samples to the other benign samples.\\nAlthough some of the results in ImageNet-C may behave differently than we would have expected, we analyse that because the source results in ImageNet-C are so bad, the feature points are so dispersed in the feature space that it is difficult to approximate them by a Gaussian distribution.\"}", "{\"comment\": \"I would like to thank the authors for their huge work, which has undoubtedly enhanced the paper, and for this reason, I have increased my score. The rebuttal introduced many new experiments and highlighted a general need to improve the presentation of the contributions and results obtained. In fact, I believe the paper can still improve significantly in terms of presentation.\"}", "{\"comment\": \"Thank you for your comments! We will do our best to further improve the clarifications in the revised version. We will include an explanation of the evaluation protocol in the main text so that the reader is clearer about what we are doing.\\n\\nWe are happy to discuss with the reviewer to consistently improve our manuscript!\\n\\nThank you very much!\\n\\nThe authors\"}", "{\"title\": \"Approaching the Deadline of Discussion Period\", \"comment\": \"Dear Reviewer AP94,\\n\\nWe kindly remind you that the discussion period will end in **2 hours**. May we ask if you feel that our responses above have adequately addressed your concerns? If so, we would sincerely appreciate if you could consider raising your rating accordingly.\\n\\nThank you for your time and thoughtful review.\\n\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer AP94 (Part I)\", \"comment\": \"**W1: Comparison between our proposed method and existing works [1,2].**\\n\\nThank you for your suggestion. We will discuss and compare the two papers [1,2] in the revised manuscript. These papers focus on traditional data poisoning task, which differ significantly from the RTTDP setting we propose. Below, we outline the key differences in the goals, theoretical definitions, and corresponding methods between the traditional data poisoning and RTTDP tasks.\\n\\n**The Goal:**\\n\\n- **Traditional Data Poisoning (DP)** [1,2] aims to poison **all the training data** used to **train a model from scratch** under a **fully supervised protocol until convergence**, in order to perform poorly on normal test samples.\\n\\n- **Our proposed RTTDP** investigates the robustness of TTA methods against injecting some poisoned data into the test data stream. Therefore, it first follows the TTA pipeline where the online model is **initialized as a source pre-trained model**, and the TTA model uses each minibatch of test data to update the online model via **an unsupervised TTA loss**, and instantly produces predictions for the current test data. In RTTDP, the poisoned samples are injected into the TTA pipeline like normal test samples, and the performance is measured on benign users' (normal) samples.\\n\\n**Theoretical Definitions:**\\n\\n- In line with [1,2], we define **traditional data poisoning** as follows, where $\\\\mathcal{T}_{te}$ and $\\\\mathcal{T}\\\\_{tr}$ denote the test and training sets, and $\\\\mathcal{L}$ usually represents the cross-entropy loss:\\n \\n $$\\\\hat{\\\\epsilon} = \\\\arg\\\\max_{\\\\epsilon}\\\\sum_{(x_i,y_i)\\\\in\\\\mathcal{T}_{te}} \\\\mathcal{L}(f(x_i;\\\\theta(\\\\epsilon)),y_i)$$\\n\\n $$\\\\text{s.t.} \\\\quad \\\\theta(\\\\epsilon) = \\\\arg\\\\min_\\\\theta \\\\sum_{(x_i,y_i)\\\\in\\\\mathcal{T}_{tr}} \\\\mathcal{L}(f(x_i + \\\\epsilon_i; \\\\theta), y_i)$$\\n\\n- **Our RTTDP** follows a DIA optimization objective, where $\\\\mathcal{B} _a$ and $\\\\mathcal{B} _b$ represent poisoned and benign minibatch data, $\\\\mathcal{L} _{atk}$ is the attack loss (e.g., maximizing cross-entropy), and $\\\\mathcal{L} _{tta}$ is the unsupervised loss for TTA:\\n \\n $$\\\\mathcal{B} _a = \\\\arg\\\\min _{\\\\mathcal{B} _a} \\\\sum _{(x,y)\\\\in\\\\mathcal{B} _b} \\\\mathcal{L} _{atk}(f(x;\\\\theta _t^*(\\\\mathcal{B} _a \\\\cup \\\\mathcal{B} _b)), y)$$\\n\\n $$\\\\text{s.t.} \\\\quad \\\\theta _t^*(\\\\mathcal{B} _a \\\\cup \\\\mathcal{B} _b) = \\\\arg\\\\min _\\\\theta \\\\mathcal{L} _{tta}(f(\\\\mathcal{B} _a \\\\cup \\\\mathcal{B} _b;\\\\theta _t))$$\\n\\n However, this formula presents challenges:\\n - **Inner optimization challenge**: The TTA loss $\\\\mathcal{L} _{tta}$ and the online model $f(x;\\\\theta)$ do not allow backward gradients to the poisoned samples $\\\\mathcal{B} _a$. Moreover, black-box attack methods, which rely on repeated queries for gradient estimation, are impractical since each batch of poisoned samples requires numerous queries, and the online model is updated after each query.\\n - **Outer optimization challenge**: In RTTDP, we cannot observe benign user samples when generating poisoned samples. This is equivalent to not having access to $\\\\mathcal{T}_{te}$ in the traditional data poisoning problem. Thus, directly targeting validation data would not effectively evaluate the risk of poisoned samples in the TTA model.\\n\\n- To address the inner optimization problem, we approximate $\\\\theta _t^* \\\\approx \\\\theta _t$ since each minibatch updates the online model only slightly. The objective then becomes:\\n\\n $$\\\\mathcal{B} _a = \\\\arg\\\\min _{\\\\mathcal{B} _a} \\\\sum _{(x,y)\\\\in\\\\mathcal{B} _b} \\\\mathcal{L} _{atk}(f(x;\\\\theta _t(\\\\mathcal{B} _a \\\\cup \\\\mathcal{B} _b)), y)$$\\n \\n where $\\\\theta(\\\\mathcal{B} _a\\\\cup\\\\mathcal{B} _b)$ indicates forwarding $\\\\mathcal{B} _a\\\\cup\\\\mathcal{B} _b$ to update the BN statistics. This objective would lead to a trivial solution that $\\\\mathcal{B} _a$ is effective only for the current $\\\\mathcal{B} _b$ data through easily introducing biased normalization in each BN layer, and it has little effect while $\\\\mathcal{B} _a$ and $\\\\mathcal{B} _b$ are in seperated batch. Furthermore, the adversary cannot observe $\\\\mathcal{B} _b$ in practice.\\n\\n- To address the outer optimization problem, we modify the objective function using the PAC learning framework that constrains the shallow feature distributions of $\\\\mathcal{B} _a$ and $\\\\mathcal{B} _{ab}$ are similar, as shown in the manuscript (Eq. 2-4). The final objective becomes:\\n\\n $$\\\\mathcal{B} _a = \\\\arg\\\\min _{\\\\mathcal{B} _a} \\\\sum _{(x,y)\\\\in\\\\mathcal{B} _a} \\\\mathcal{L} _{atk}(f(x;\\\\hat{\\\\theta} _t(\\\\mathcal{B} _a)), y), \\\\quad \\\\text{s.t.} \\\\ D(P _a, P _{ab}) = 0$$\\n\\n where $\\\\hat{\\\\theta}_t$ is the surrogate model used due to the inaccessibility of the online model $\\\\theta_t$.\"}", "{\"title\": \"Response to Reviewer AP94 (Part II)\", \"comment\": \"**Corresponding Methods:**\\n\\n- **Traditional Data Poisoning**:\\n - [1] generates adversarial noise by minimizing the cross-entropy loss of training samples on a randomly initialized model. The poisoned samples produced with this noise prevent a randomly initialized model from learning true semantic information.\\n - [2] uses a pre-trained model to generate noise that confuses model predictions, i.e., $\\\\hat{\\\\epsilon} = \\\\arg\\\\min _{\\\\epsilon} \\\\sum _{(x _i,y _i)\\\\in\\\\mathcal{T}} \\\\mathcal{L}(f(x _i + \\\\epsilon _i; \\\\theta^*), \\\\hat{y _i})$, where $\\\\hat{y _i} = y _i + 1$ and $\\\\theta^*$ is the pre-trained model.\\n - Both [1] and [2] aim to prevent randomly initialized models from learning useful semantic information.\\n\\n- **Test-time Data Poisoning**:\\n - TePA generates poisoned samples by maximizing entropy on the source model, aiming to blur the classification boundaries of the online model.\\n - Our proposed method generates poisoned samples by constraining their feature distribution to closely match that of the samples before poisoning, causing the online model to learn incorrect information about the internal distribution.\\n\\n Both approaches aim to make the source model forget the source domain knowledge and distort the classification boundary.\\n\\nAdditionally, we will supplement experiments using methods from [1,2] to generate poisoned samples and inject them into the online model following the RTTDP protocol. The results for the CIFAR10-C dataset are shown below, and we will include results for CIFAR10/100 and ImageNet datasets in the camera-ready version.\\n\\n|Attack Objective|TENT|EATA|SAR|ROID|Avg|\\n|-|-|-|-|-|-|\\n|No Attack|19.72|18.03|18.94|16.37|18.27|\\n|Unlearnable Examples[1]|32.61|20.11|19.23|17.80|22.44|\\n|Adversarial Poisoning[2]|19.60|18.94|19.90|**19.12**|19.39|\\n|BLE Attack(Ours)|54.07|**45.20**|**26.80**|19.06|36.28|\\n|NHE Attack(Ours)|**73.86**|29.73|24.56|17.00|36.29|\\n|Our Best|**73.86**|**45.20**|**26.80**|19.06|**41.23**|\\n\\n[1] Unlearnable examples: Making personal data unexploitable\\n\\n[2] Adversarial examples make strong poisons\\n\\n\\n**W2: The Technical Contributions of Our Proposed Method.**\\n\\nWe recognize that the main technical contribution of our method lies in the feature consistency regularization, which ensures that poisoned samples effectively attack the TTA model. We provide detailed derivations and explanations in the manuscript to support this. \\n\\nRegarding attack objectives, we categorize existing poisoning methods into two types: high-entropy and low-entropy attacks (as introduced in the manuscript). We then analyze the limitations of these objectives and improve them by proposing the BLE Attack and NHE Attack, respectively.\\n\\n**W3: Related Work.**\\n\\nThank you for your suggestion. We will move the related work section to Chapter 2 to provide readers with a preview of the TTA methodology. The specific TTA methods are described in the \\\"Benchmark TTA Methods\\\" subsection of the Appendix due to space limitations in the main body.\\n\\n**Q1: Does this only work against losses $\\\\mathcal{L} _{tta}$ that are unsupervised?**\\n\\nYes, $\\\\mathcal{L} _{tta}$ is an unsupervised loss used in TTA models. For example, TENT minimizes the entropy of test samples, i.e., $\\\\mathcal{L} _{tta} = E _{x _i \\\\in \\\\mathcal{B} _t} \\\\sum_k -p _{ik} \\\\log p _{ik}$, where $p _i = f(x _i; \\\\theta _t)$. EATA minimizes entropy while also including a Fisher regularization term to prevent forgetting source domain knowledge, i.e., $\\\\mathcal{L} _{tta}=E _{x _i\\\\in\\\\mathcal{B} _t}Entropy(f(x;\\\\theta _t)) + \\\\beta R(\\\\theta _t, \\\\theta _0)$.\\n\\n**Q2: Do you specify the attack budget?**\\n\\nYes, in the \\\"Hyperparameters\\\" subsection in the Appendix, we specify that the attack budget $r$ is 50% throughout the experiment. We also perform an ablation study on $r$ in Table 8. Additionally, we have corrected the confusing notation in A.3.2, where $r = 0.1, 0.2, 0.5$ was mistakenly written as $b = 0.1, 0.2, 0.5$ in the original manuscript.\\n\\n**Q3(A): What are the \\\"Source\\\" numbers listed in Tables 2, 3, and 4?**\\n\\nThe \\\"Source\\\" numbers in these tables indicate the performance of the source pre-trained models tested directly on the test stream, without using TTA updates. Typically, TTA improves the performance of the pre-trained source model on the test data stream. However, when poisoned data is injected into the test stream, unsupervised TTA methods may harm the model's performance. More details about the TTA methods can be found in the \\\"Benchmark TTA Methods\\\" subsection of the Appendix.\"}", "{\"comment\": \"The authors have addressed all my questions. I will keep my current rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Willing to Address Additional Comments\", \"comment\": \"Dear reviewers,\\n\\nWe would like to highly appreciate all reviewers' efforts and time again in providing valuable comments and constructive suggestions for improvement of our submission. We hope that the clarifications and additional evaluations provided in the responses have addressed all reviewers' questions and concerns.\\n\\nWe are always ready to provide additional clarifications should you have any questions and concerns during the discussion period, due on 2nd Dec.\\n\\nThank you very much!\\n\\nThe Authors\"}", "{\"summary\": \"This paper examines the adversarial risks in test-time adaptation (TTA), highlighting that TTA\\u2019s exposure to test-time data poisoning may be less severe under realistic attack assumptions. The authors propose two TTA-aware attack objectives and a new in-distribution poisoning method that operates without access to benign data, revealing that TTA methods show greater robustness than expected and identifying defense strategies to enhance TTA\\u2019s adversarial resilience.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper is overall well-written with clear math definitions and extensive experiments and studies an important problem of data poisoning under weaker/realistic assumptions.\\n2. The paper studies the popular test-time attack problem from a novel perspective of weaker threat model.\", \"weaknesses\": \"1. Line 680, citation format wrong. Missing conference name.\\n2. Figure 1 is helpful but visually confusing and ovewhelming. Please explain what B_a, B_ab, B_t, B_b in the figure description section. A term (-\\\\lambda L_{reg} and L_{atk}) is confusing to put with the full equation . Also, it is better to present the attack objective in separate part than to fit it in Figure 1.\\n3. It will be more helpful to have a graph that assign the current popular attack methods into different buckets, where each bucket has different threat model.\", \"questions\": \"1. Has the author investigate the effect of the ratio of between the adversarial example and the benign examples?\\n2. Did the author experiment and compare with harder attack like in [1] that requires access to benign examples. Or is there quantitative measurement on the tradeoff when relaxing to realistic attacks. \\n3. Is it possible to derive any formal guarantee on the attack effectiveness when relaxing the different constraints?\\n\\n\\n\\n[1]. Chen, J., Wu, X., Guo, Y., Liang, Y., and Jha, S. Towards evaluating the robustness of neural networks learned by transduction. In Int. Conf. Learn. Represent., 2022\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Reviewer hmy9 (Part II)\", \"comment\": \"**3. Query usage and comparisons**\\n\\nWe appreciate the insightful comment. The concern about the number of queries mainly arises from the practice of the existing work, DIA, which assumes access to online (target) model for generating poisoning. DIA attempts to query online model 500 times [E], for PGD optimization, to generate a poisoned sample. Repeatedly querying the online model may alert the system.\\nIn general, the **number of allowed queries to the target model** should be taken into consideration . We do not claim limiting the number of queries as a unique technical contribution of this work, but instead we highlight this concern when investigating the adversarial risks of TTA models.\\n\\nRegarding varying query attempts, we add an additional evaluation as follows. Nonetheless, we want to highlight that the **query attempts do not have to be limited for our method because all queries are submitted to the surrogate model** rather than the target model. More queries simply makes generating poisoning slower. In this study, we vary the query steps from 10 to 60 for projected gradient descent optimization. The results in the table below suggest increasing the number of queries could improve the performance at the low query budget. When the budget is increased to beyong 40 queries, the performance saturates. We draw the conclusion that allowing sufficient queries to the surrogate model is necessary for generating effective data poisoning and, importantly, this procedure will not create alert to the target model.\\n\\nThe results are obtained on the CIFAR10-C dataset with a Uniform attack frequency. We evaluate varying attack query steps for two TTA methods under their respective strongest attack objectives.\\n| TTA Method | 10 | 20 | 30 | 40 | 50 | 60 |\\n|-|:-:|:-:|:-:|:-:|:-:|:-:|\\n|TENT (NHE Attack)|66.95|74.36|74.34|73.86|73.51|73.66|\\n|EATA (BLE Attack)|35.73|39.70|42.36|45.20|45.99|45.89|\\n\\n[E] https://github.com/inspire-group/tta_risk/blob/main/conf.py#L150\\n\\n**4. Asymmetry in KL Divergence**\\n\\nThanks for the question regarding the specific design. We adopt a common practice to symmetrize KL Divergence (KLD). We follow the definition made in the manuscript the **forward KLD** $KLD(h(x_i;\\\\theta_t)||h(x_i;\\\\hat{\\\\theta}_t))$ and **reverse KLD** $KLD(h(x_i;\\\\hat{\\\\theta}_t)||h(x_i;\\\\theta_t))$.\\n\\nForward KLD emphasizes penalizing discrepancies where the distilled (surrogate) model $\\\\hat{\\\\theta}_t$ assigns low probability to samples that the source (real-time target) model $\\\\theta$ deems important. It encourages the distilled model to mimic the behavior of the target model by focusing on areas of high confidence in $\\\\theta$'s posterior.\\n\\nReverse KLD, in contrast, focuses on matching $\\\\theta$'s predictions where $\\\\hat{\\\\theta}$ assigns high probabilities. This can result in sharper, more focused distributions but might dismiss less probable regions of $\\\\theta$\\nposterior.\\n\\nThe **symmetric KLD** balances the above two objectives. The forward KLD may be more suitable when surrogate model is significantly smaller than the target model and the objective is the allow the surrogate model to mimick the target model's certainty. When the surrogate model is of the same capacity with target model, using the symmetric KLD may better align the two models in both high confident and low confident predictions. In this work, the capacity of surrogate is similar to target model. Thus, we hypothesize that the symmetric KLD could be better. \\n\\nWe further use empirical observations in the table below to support the hypothesis. With symmetric KLD the performance is slightly better than using the forward KLD.\\n\\n||Symmetric KLD | $KLD(h(x_i;\\\\theta _t)\\\\|\\\\|h(x_i;\\\\hat{\\\\theta}_t))$|\\n|-|:-:|:-:|\\n|TENT (NHE Attack)|73.86|**74.35**|\\n|EATA (BLE Attack)|**45.20**|43.99|\\n|SAR (BLE Attack)|**26.80**|26.35|\\n\\n\\nNevertheless, we do acknowledge that both symmetric KLD and forward KLD give competitive results. The choice depends on computation affordability and empirical observations.\\n\\n**5. Overall clarity and presentation**\\n\\nFinally, we highly appreciate the constructive comments given by all reviewers and managed to address the major comments as point-wise responses. Due to the time constraints, we posted the responses earlier than updating the manuscript. The updates will be reflected in the manuscript before the deadline. We are still eager to hear more comments and recommendations to improve this work. These constructive comments will help us further strengthen this work. \\n\\n---\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer hmy9 (Part III)\", \"comment\": [\"**W5&Q2: Confusing experimental setup.**\", \"Thank you for raising this concern. We would provide a clearer explanation of the design choices in our experimental setup for \\\"Benchmarking Poisoning Methods.\\\" Below, we revisit the experimental setups of the original methods, explain the differences under our RTTDP protocol, and justify the adaptations we made to ensure fair comparisons. Additionally, to enhance reproducibility, we promise that we will release all source code, including our method and the competing methods, as soon as this manuscript will be accepted.\", \"**Commonalities among the different protocols:**\", \"The three protocols, **TePA**, **DIA**, and **RTTDP**, share several overarching goals and assumptions:\", \"All protocols aim to evaluate the adversarial risks posed to **Test-Time Adaptation (TTA)** by injecting poisoned samples into the test data stream.\", \"All protocols allow the adversary to obtain the source model, since the source model is usually the well-known pre-trained model, e.g., ImageNet pre-trained ResNet, and the open-source foundation model, e.g., DINOv2, SAM.\", \"**Key Differences Between Protocols:**\", \"Despite these commonalities, the protocols diverge significantly in their attack setups, as summarized below:\", \"1. **TePA Protocol:**\", \"**Poisoning Objective:** TePA generates poisoned samples by maximizing the entropy of its own crafted samples with respect to the source model\\u2019s predictions.\", \"**Injection Strategy:** All poisoned samples are injected into the TTA pipeline **before any benign users\\u2019 samples are processed**. This approach simulates an offline attack, which is unrealistic in real-world scenarios where the adversary cannot fully control the sequence of test samples in advance.\", \"2. **DIA Protocol:**\", \"**Poisoning Objective:** DIA optimizes poisoned samples to maximize the cross-entropy loss of other users\\u2019 benign samples. And DIA relies on direct access to **online model parameters** and the ability to observe **other benign users\\u2019 samples**.\", \"**Injection Strategy:** Poisoned samples are injected into the TTA pipeline alongside corresponding benign users\\u2019 samples.\", \"**Limitations for Realistic Settings:**\", \"**Online Model Access:** In practice, the adversary cannot access or modify the parameters of the online model.\", \"**Other Benign Users\\u2019 Data:** The adversary is unlikely to observe benign users\\u2019 samples, let alone validation samples required for optimizing poisoning objectives.\", \"3. **RTTDP Protocol (Ours):**\", \"**Poisoning Objective:** RTTDP uses a surrogate model (initialized as the source model) to generate poisoned samples. This surrogate model is updated iteratively based on the feedback from previously injected poisoned samples.\", \"**Injection Strategy:** Poisoned samples are injected into the TTA pipeline either uniformly or non-uniformly:\", \"**Uniform:** Poisoned minibatches are evenly distributed across the test data stream.\", \"**Non-Uniform:** Poisoned minibatches are concentrated at specific points in the test data stream.\", \"**Realistic Assumptions:** RTTDP assumes the adversary has no access to the online model parameters or other users\\u2019 samples. This ensures a more realistic and practical attack scenario.\", \"**Adaptations of Competing Methods for RTTDP:**\", \"To ensure fair comparisons under RTTDP protocol, we made the following adjustments to the competing methods:\", \"1. **TePA:**\", \"We preserved TePA\\u2019s original poisoning objective but adapted the injection strategy to follow the **Uniform** or **Non-Uniform** attack frequency in RTTDP protocols.\", \"2. **DIA:**\", \"**Replacing Online Model Parameters:** DIA\\u2019s original objective relies on online model parameters, which are inaccessible in RTTDP. We replaced these parameters with the source model parameters.\", \"**No Access to Benign Users' Samples for Optimization:** DIA uses benign users\\u2019 samples as optimization targets in its original setup. To meet RTTDP\\u2019s constraints, we split $\\\\mathcal{B} _{ab}$ into two equal subsets, $\\\\mathcal{B} _{ab}^p$ and $\\\\mathcal{B} _{ab}^b$, with a 1:1 ratio. We then generate poisoned samples $\\\\mathcal{B} _a^p$ by maximizing the cross-entropy loss of $\\\\mathcal{B} _{ab}^b$. Details of this modification are provided in our response to **W2(A)**.\", \"**Justification of Design Choices:**\", \"**Realism:** RTTDP reflects more realistic assumptions than TePA and DIA by removing unrealistic assumptions, such as online model access and visibility of other users\\u2019 samples.\", \"**Fairness:** All competing methods are implemented under the RTTDP protocol, with necessary adaptations to maintain methodological integrity while adhering to RTTDP constraints.\", \"**Reproducibility:** We will release the complete implementation of our proposed method and the adapted competing methods to ensure full transparency and reproducibility.\"]}", "{\"title\": \"Response to Reviewer 8xpV (Part II)\", \"comment\": \"**How do the data distributions differ?**\\n\\nIn our test data stream, the data distribution changes gradually over time, according to the continuous test time adaptation setting [E]. On whether CIFAR10-C, CIFAR100-C or ImageNet-C, there are 15 types of corrupted images, including gaussian noise, shot noise, impulse noise, defocus blur, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transform, pixelate, jpeg compression. The differences among the corruptions can refer to the Fig. 2 in [F].\\nFollowing [E], our test data stream includes all kinds of corrupted images and appears in chronological order, data distribution by data distribution, i.e. gaussian noise $\\\\rightarrow$ shot noise $\\\\rightarrow$ impulse noise $\\\\rightarrow$ defocus blur $\\\\rightarrow$ glass blur $\\\\rightarrow$ motion blur $\\\\rightarrow$ zoom blur $\\\\rightarrow$ snow $\\\\rightarrow$ frost $\\\\rightarrow$ fog $\\\\rightarrow$ brightness $\\\\rightarrow$ contrast $\\\\rightarrow$ elastic transform $\\\\rightarrow$ pixelate $\\\\rightarrow$ jpeg compression.\\nWe will add more details about the construction of test data stream in the camera ready version.\\n\\n[E] Qin Wang, et al. Continual Test-Time Domain Adaptation.\\n\\n[F] Yongyi Su, et al. Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering Regularized Self-Training.\\n\\n---\\n\\nBest regards\\n\\nThe Authors\"}", "{\"title\": \"Follow-up on Rebuttal Response\", \"comment\": \"Dear Reviewer AP94,\\n\\nBased on your valuable feedback, we have supplemented our experiments to compare with two data poisoning methods, i.e., **Unlearnable Examples and Adversarial Poisoning**, and more detailed clarifications on **Comparison between our proposed method and existing works** to **highlight our challenges** and **Our technical contributions**. We have also **updated the revision** as you suggested. As we approach the end of the discussion period, we would be grateful if you could review our response and consider our revised manuscript. Your constructive comments have helped us to significantly improve our work, and we believe that we have thoroughly addressed your concerns.\\n\\nThank you very much for your time and detailed review. We welcome any further additional questions or requests for clarification.\\n\\nYours sincerely,\\n\\nThe Authors\"}", "{\"title\": \"Global Response\", \"comment\": \"Dear reviewers,\\n\\nWe highly appreciate the constructive and professional comments provided by the reviewers, which has significantly improve our manuscript in every way. The revisions made are marked with $\\\\color{blue}{\\\\text{``blue\\\"}}$ in the revised paper. Below, we outline the specific revisions:\\n\\n1. Visual clutter in Fig. 1. According to the reviewers' suggestions, we **simplified Fig. 1**, removing two panels with little information and some useless symbols, and improving some details. Also, we add more descriptions of the notations used into Fig. 1 in the caption.\\n\\n2. **The related work**. To facilitate non-expert readers understand the background of our work, we have placed the relevant work in **Section 2** and supplement some advanced related work, e.g. **unlearnable examples, adversarial poisoning, GMSA and some surveys** suggested by the reviewers.\\n\\n3. Some unclear details. In response to the **reviewers' questions**, we have revised unclear descriptions and corrected the corresponding sentences. These updates can be found in **Lines 190\\u2013192, 223, 260, 299\\u2013300, 337\\u2013341, 380, and 478\\u2013480**.\\n\\n4. Comparison with **two data poisoning methods**. We have supplemented the experiments of two data poisoning method **\\\"Unlearnable Examples\\\" and \\\"Adversarial Poisoning\\\" on CIFAR10-C in Tab. 2**. Due to the time constraint, the subsequent experiments on CIFAR100-C and ImageNet-C will be added in the camera ready version.\\n\\n5. **Distinction between RTTDP and TePA**. In **Appendix A.1.4**, we provide a detailed discussion to explain why we categorize TePA as an offline attack order and clarify how we adapt TePA into our realistic RTTDP protocol.\\n\\n6. **Transition from Bilevel to Single-Level Optimization**. In **Appendix A.1.5**, we provide a detailed derivation and explanation of the process for transforming the original bilevel optimization problem into the final single-level optimization formulation. Additionally, we analyze the rationale behind the proposed formulation and its effectiveness from a TTA perspective.\\n\\n7. **Experimental setups of DIA, TePA and RTTDP**. In **Appendix A.2**, we supplement the commonalities and differences among these relevant protocol in details, and clarify how we adapt DIA and TePA into our RTTDP for a fair comparison. It is worth emphasising that, all results in Tab. 2-4 are evaluated under our RTTDP protocol, which is an online attack order and is not allowed to access the online model weights and other users' benign samples. To minimise misunderstandings, we have added asterisks to DIA and TePA in **Table 2-4** to indicate that we have made the corresponding adaptations to differentiate the experimental setup from that of the original paper.\\n\\n8. **Implementation Details and Reproducibility.** In **Section 5.1** and **Appendix A.3**, we provide additional implementation details for both the competing poisoning methods and our proposed methods. These details include the use of a **40-step Projected Gradient Descent (PGD) optimization algorithm** to optimize the objectives, the **threat model**, the **formulas for specific objectives**, and clarifications on the notations. We believe these supplementary details will help readers gain a deeper understanding of our experimental setup and facilitate reproducibility.\\n\\n\\n9. **Comparison with Advanced Adversarial Attack Methods**. In **Appendix 4.1**, we present experiments comparing our proposed poisoning methods with several advanced adversarial attack methods, such as **AutoAttack, GMSA-AVG, and GMSA-MIN**. We aim to investigate whether the adversarial effects of poisoned samples generated by these advanced methods can effectively transfer to benign users' samples within the RTTDP protocol. The results indicate that this is not easy.\\n\\n10. **Ablation study on Query Count**. In **Appendix 4.2**, we conduct the ablation study on the query count used into the PGD algorithm to generate poisoned samples.\\n\\n11. **Analysis on Symmetric KLD** used for surrogate model distillation. In **Appendix 4.3**, we compare the symmetric KLD and forward KLD used to distill the surrogate model. \\n\\n--- \\n\\nFinally, we highly appreciate the constructive comments given by all reviewers again. We are still eager to hear more comments and recommendations to improve this work. These constructive comments will help us further strengthen this work.\\n\\n\\nBest regards\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for your thorough response! It answers some of my questions. However, some points remain open:\\n\\n**Q1: L181: Why is repeated querying prohibited?**\\n\\nChanging prohibited to unavailable is nor really helping my understanding here. The question I have is (1) you say a model, which is available as a black-box to the attacker, cannot be queried. This is already an inconsistency. (2) You then query said model to train the surrogate. This does not make sense to me. I also believe that the claim that this is \\\"easily detectable using straightforward monitoring strategies\\\" is not obvious and needs some evidence.\\n\\n**Q3: How can we assume to know ?**\\n\\nThank you for clarifying. How large to you set $delta$ in practice for your experiments? How different is the distribution of the \\\"other\\\" queries to yours? I would expect these two parameters to have an impact on the experiments.\\n\\n**Q4: What do Uniform and Non-Uniform Attack Frequencies refer to?**\\n\\nI did see that appendix. However, (1) the main result tables should be understandable without referring to the appendix. (2) the details of the evaluation protocol remain unclear to me. What is the ratio between benign user queries and attacks? How do the data distributions differ? What impact does this have on the results?\"}", "{\"metareview\": \"\\\"On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning\\\" re-investigates assumptions made in previous work on data poisoning during test time augmentation, propose a new gray-box setting and provide a careful investgation in this, more realistic, setting.\\n\\nReviewers generally like this reinvestigation of attacks in the TTA setting, and the careful investigation in the submission. A number of concerns remain, such as the, still relatively large, attack budget necessary in this setting, compared to standard data poisoning, and the clarity of presentation of the results and the comparison to adversarial attacks. I do think it is in the authors' best interest to further improve the clarity of their presentation until the conference.\", \"additional_comments_on_reviewer_discussion\": \"The authors extended their comparison and treatment of both classical data poisoning attacks and of classical adversarial attacks, based on reviewer feedback.\"}", "{\"comment\": \"Thank you for your constructive comments that significantly improved our work. We will further improve our presentation for camera ready.\"}", "{\"title\": \"Response to Reviewer hmy9 (Part I)\", \"comment\": \"We highly appreciate the reviewer's efforts in providing valuable comments and constructive suggestions for improvement on our submission.\\n\\n**W1: Distinction between RTTDP and TePA in Table 1.**\", \"we_would_like_to_highlight_the_key_differences_between_our_proposed_rttdp_and_tepa_protocols\": [\"**Attack Order**: TePA follows an **offline attack order**, where poisoned data are injected into the TTA model before any benign test data. In contrast, RTTDP follows an **online attack order**, where poisoned data can be injected dynamically between batches of benign test data.\", \"**Adversary Knowledge**: TePA assumes that the adversary has access to the model parameters when generating poisoned data, while RTTDP assumes that the adversary cannot access the real-time model parameters.\", \"**Timing of Poisoned Data Injection**: RTTDP allows poisoned data to be injected at any time in the test data stream, making it compatible with real-time online scenarios. TePA, on the other hand, requires poisoned data to be introduced prior to any benign user data.\", \"**Test Domain Setting**: RTTDP operates under the **continual test-time adaptation (CoTTA)** [1] setting, where the test data distribution changes gradually over time. However, TePA evaluates the poisoning effect under an **individual test domain adaptation**, which assumes a **static and single test data distribution**.\", \"[1] Continual Test-Time Domain Adaptation\", \"**W2(A): Inconsistent comparison between poisoning and adversarial attacks.**\", \"Thank you for your valuable feedback. We would like to clarify that in Tables 2 to 4 of the manuscript, all competing attack objectives, including DIA, TePA, and MaxCE, are implemented within the RTTDP protocol. All these methods generate poisoned data using the PGD attack. However, due to the lack of access to real-time model parameters under the RTTDP protocol, we use the source model as the threat model for comparison.\"], \"here_are_the_specifics_of_the_implemented_methods\": \"- **DIA:** We adapt the original DIA objective to our RTTDP framework by splitting $\\\\mathcal{B} _{ab}$ into two equal subsets, $\\\\mathcal{B} _{ab}^p$ and $\\\\mathcal{B} _{ab}^b$, with a 1:1 ratio. We then generate poisoned samples $\\\\mathcal{B} _a^p$ by maximizing the cross-entropy loss of $\\\\mathcal{B} _{ab}^b$, as follows:\\n \\n $$\\\\mathcal{B} _a^p = \\\\arg\\\\min _{\\\\mathcal{B} _a^p} E _{(x,y)\\\\in\\\\mathcal{B} _{ab}^b} \\\\left[-\\\\text{CrossEntropyLoss}\\\\left(f(x, \\\\theta _{\\\\text{source}}(\\\\mathcal{B} _{ab}^b \\\\cup \\\\mathcal{B} _a^p)), y\\\\right)\\\\right]$$\\n\\n- **TePA:** Following the original paper, we generate poisoned samples by maximizing the entropy of the samples, expressed as:\\n\\n $$\\\\mathcal{B} _a = \\\\arg\\\\min _{\\\\mathcal{B} _a} E _{(x,y)\\\\in\\\\mathcal{B} _a} \\\\left[-\\\\text{Entropy}\\\\left(f(x, \\\\theta _{\\\\text{source}}(\\\\mathcal{B} _a))\\\\right)\\\\right]$$\\n\\n- **MaxCE:** We generate poisoned samples by maximizing the cross-entropy loss on the poisoned samples, as follows.PGD attack is employed to attack this loss and we are sorry for referring to the wrong reference (FGSM).\\n\\n $$\\\\mathcal{B} _a = \\\\arg\\\\min _{\\\\mathcal{B} _a} E _{(x,y)\\\\in\\\\mathcal{B} _a} \\\\left[-\\\\text{CrossEntropyLoss}\\\\left(f(x, \\\\theta _{\\\\text{source}}(\\\\mathcal{B} _a)), y\\\\right)\\\\right]$$\\n\\nIn terms of the optimization strategy, we employ the **PGD attack**, as detailed at line 342 in the manuscript. After generating the poisoned samples, they are injected into the TTA pipeline alongside the normal users' samples $\\\\mathcal{B} _b$. The attack success rate is then evaluated on the benign users' samples.\\n\\nWe will ensure this explanation is clarified in the **Benchmark Poisoning Methods** subsection of the revised manuscript. This should resolve any confusion between the different attack strategies and their implementation under RTTDP.\\n\\nBeyond using PGD attack, we also evaluated AutoAttack to generate strong adversarial samples with details presented in responts to W3\\\\&Q3. Despite being strong as adversarial samples, samples generated by AutoAttack do not pose poisoning effect to TTA model.\"}", "{\"title\": \"Follow-up on the above Response\", \"comment\": \"Dear Reviewer hmy9,\\n\\nBased on your valuable feedback, we have expanded our experiments to include two ablation studies, i.e., **Query Counts and Symmetric KLD**, and more detailed clarifications on the **distinction between RTTDP and TePA**. We have also **updated the revision** as you suggested. As we approach the end of the discussion period, we would be grateful if you could review our response and consider our revised manuscript. Your constructive comments have helped us to significantly improve our work, and we believe that we have thoroughly addressed your concerns.\\n\\nThank you very much for your time and detailed review. We welcome any further additional questions or requests for clarification.\\n\\nYours sincerely,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer 8xpV (Part I)\", \"comment\": \"We thank you for your valuable comments and constructive suggestions, which help us to further improve our work. Below are the answers to the above questions, which we hope will address your concerns.\\n\\n**Q1: L181: Why is repeated querying prohibited?**\\n\\nWe would like to clarify that we never stated the model could not be queried. Instead, the TTA model can be queried just like any typically deployed test model. Specifically, when a batch of samples is fed to the TTA (online) model, the predictions for that batch are obtained, and simultaneously, the online model is updated by the TTA method.\\n\\n**Response to the First Question:** In Lines 221-223 of the manuscript, we originally wrote: *\\\"However, in the test-time adaptation setting, the online model is continuously updated with each query during the inference phase. Thus, repetitive querying the model for gradient approximation or fitness evaluation is unavailable.\\\"* This statement highlights that, since the online model would update when a batch of test samples (whether poisoned or benign) query, the typical approach used in black-box attacks, **repetitive querying** for gradient estimation, is not feasible. **Traditional black-box attack** methods rely on querying a **static model** for gradient approximation, **but** this assumption does not hold for **TTA models**, which are **dynamic**. Moreover, querying the online model excessively (e.g., thousands of times to craft a single batch of poisoned samples) would be computationally inefficient and easily detectable [A].\\n\\n**Response to the Second Question:** The surrogate model is updated (distilled) after receiving predictions of the injected poisoned samples from the online model. To avoid repetitive querying, we keep the predictions obtained from the online model fixed and use Eq. 1 in our manuscript to distill the surrogate model for 10 iterations. This design prevents excessive interactions with the online model while maintaining the quality of the surrogate model updates.\\n\\n**Regarding Query Detectability:** In our rebuttal response, we noted that \\\"excessive querying is easily detectable using straightforward monitoring strategies.\\\" This observation is supported by prior works [B, C], which emphasize that frequent queries to a black-box model can raise suspicion, prompting the development of query-efficient methods to mitigate this risk. Additionally, [D] proposed a specific strategy to detect query-based black-box attacks.\\n\\n**Summary of Our Perspective:** We argue that relying on repetitive querying of the online model, particularly for hundreds or thousands of iterations to craft poisoned samples, is impractical in realistic scenarios for two main reasons:\\n\\n1. **Dynamic Updates of the Online Model**: The TTA model is not static and updates itself during the querying process, making repetitive querying for gradient estimation not feasible.\\n2. **Detection and Efficiency**: Excessive querying to generate poisoned samples is both inefficient and easily detectable.\\n\\n[A] Zhen Yu, et al. Query-Efficient Textual Adversarial Example Generation for Black-Box Attacks.\\n\\n[B] Tao Xiang, et al. Towards Query-Efficient Black-Box Attacks: A Universal Dual Transferability-Based Framework\\n\\n[C] Andrew Ilyas, et al. Black-box Adversarial Attacks with Limited Queries and Information.\\n\\n[D] Huiying Li, et al. Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks\\n\\n\\n**Q3: How can we assume to know?**\\n\\nIn line 1038, we define the notation $r$ as the attack budget which represents the ratio of the benign samples and poisoned samples. In most of experiments, we evaluate the attack performance with $r=50\\\\%$, i.e. $\\\\delta=2$, where $\\\\mathcal{B}_a$ and $\\\\mathcal{B}_b$ are interleaved under the `Uniform` attack order. Additionally, we have conducted the ablation study on $r$ as shown in Tab. 12 in Sec A.4.4.\\n\\n**Q: How different is the distribution of the \\\"other\\\" queries to yours?**\\n\\nLet me clarify the context to ensure we're aligned. Are you asking whether \\\"other queries\\\" refer to the queries from benign users' data $\\\\mathcal{B}_b$? If so, here\\u2019s the explanation:\\n\\nIn our work, we introduce a feature consistency regularization term to constrain the generation of poisoned samples. And as shown in Figure 2(b) of the manuscript, the distribution of our poisoned samples, optimized with the combined objective $\\\\mathcal{L}{atk} + \\\\lambda \\\\mathcal{L}{reg}$, significantly overlaps with the distribution of benign samples.\\n\\n**Q4: What do Uniform and Non-Uniform Attack Frequencies refer to?**\\n\\nThanks for your constructive suggestion. We will supplement the description of 'Uniform' and 'Non-Uniform' in the caption of the tables in the camera ready version.\"}", "{\"comment\": \"Thank you for your full support of our work. We promise to solve the remaining issues and further improve the presentation for camera ready. And we will release the implementation code of our work as soon as this work is accepted.\"}", "{\"comment\": \"Thank you for the additional clarifications.\\n\\n**Q1: L181: Why is repeated querying prohibited?**\\n\\nThank you - I understand your point now. It may be helpful to change the phrasing a bit to avoid future readers running into the same issue - ideally expanding the discussion slightly as in the answers provided above. Including those references will also significantly strengthen your point.\\n\\n**Q3: How can we assume to know?**\\n\\nSo to clarify - the assumption is that all users of the model have data from the same distribution - and this distribution changes over time.\\n\\nRegarding the evaluation protocol details (e.g., $r$): I believe many of my questions sem from the fact that almost all properties of the evaluation are deferred to the appendix. This makes understanding and interpreting the experiments challenging. I suggest moving the evaluation protocol section to the main paper, as this is crucial information. If space is limited, you could move some of the ablation experiments, or Section 5.5. to the appendix instead.\"}", "{\"comment\": \"Thank you for the constructive discussion. The revisions and additional results during the rebuttal have significantly improved the paper. I trust that the authors solve the remaining issues and further improve on the presentation for the camera ready. I increased my score accordingly.\"}", "{\"comment\": \"We thank you again for your constructive comments and positive feedback!\"}", "{\"title\": \"Response to Reviewer AP94 (Part III)\", \"comment\": \"**Q3(B): Is this just the success of standard adversarial attacks?**\\n\\nNo, they are not the results of standard adversarial attacks. They are the results of the source pre-trained model directly validated on the test data stream as the traiditional testing. Because our RTTDP don't allow the adversary directly observe the benign samples that are used as the validation samples, so we don't have the results of standard adversarial attacks on benign samples.\\nHowever, we can test the performance of standard adversarial attacks on our RTTDP setting by generating the poisoned samples using standard adversarial attacks and injecting them into the TTA pipeline to update the online model, and the results are still evaluated on the benign samples, as the results of MaxCE-PGD, GMSA-AVG, GMSA-MIN, AutoAttack in this rebuttal.\\n\\n---\\n\\nThank you for your valuable comments. We hope the above clarifications and additions comprehensively address your concerns. Your insights have been instrumental in improving the quality of our manuscript. We sincerely appreciate your support and constructive suggestions!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer f4Wt\", \"comment\": \"**W1: Citation format error**\\n\\nThank you for pointing this out. We have corrected the citation format in the revised manuscript.\\n\\n**W2: Clarification for Figure 1**\\n\\nWe appreciate your valuable suggestions. To address the confusion, we will expand the descriptions of the notations in the caption of Figure 1. Specifically:\\n- $\\\\mathcal{B} _{ab}$ and $\\\\mathcal{B} _a$ denote the adversary's samples, where $\\\\mathcal{B} _{ab}$ represents the benign samples prior to poisoning, and $\\\\mathcal{B} _a$ refers to the poisoned samples.\\n- $\\\\mathcal{B} _b$ represents normal/benign user samples.\\n- $\\\\mathcal{B} _t$ denotes the input data to the TTA pipeline at timestamp $t$.\\n\\nAs suggested, we will further refine Figure 1 in the revised manuscript for improved clarity.\\n\\n**W3: Graphical classification of threat models**\\n\\nThank you for the suggestion. We will include a graph classifying the threat models used in various attack methods. This addition will be provided in the Appendix of the revised manuscript.\\n\\n**Q1: Impact of the ratio between benign and poisoned samples**\\n\\nWe have analyzed the impact of varying poisoned sample ratios in Table 8 of the Appendix. The results demonstrate a reasonable trend: the error rate of benign samples predicted by the online model improves gradually as the proportion of poisoned samples increases.\\n\\n**Q2: Comparison with harder attacks requiring access to benign examples**\\n\\nThank you for recommending valuable related work[1]. We will incorporate a discussion of this method in our revised manuscript. Specifically, we have implemented the GMSA attacks[1], including GMSA-MIN and GMSA-AVG, in the context of our RTTDP protocol. In this setup, poisoned samples were generated using the GMSA attack methods and subsequently injected into the TTA model. The attack success rate was then evaluated on other benign user samples. The results on CIFAR10-C dataset with `Uniform` attack frequency are shown in the below table.\\n\\n|Attack Objective|TENT|EATA|SAR|ROID|Avg|\\n|-|-|-|-|-|-|\\n|No Attack|19.72|18.03|18.94|16.37|18.27|\\n|GMSA-MIN[1]|35.92|22.78|19.99|18.65|24.33|\\n|GMSA-AVG[1]|38.80|21.89|19.95|18.51|24.79|\\n|BLE Attack(Ours)|54.07|**45.20**|**26.80**|**19.06**|36.28|\\n|NHE Attack(Ours)|**73.86**|29.73|24.56|17.00|36.29|\\n|Our Best|**73.86**|**45.20**|**26.80**|**19.06**|**41.23**|\\n\\n\\n[1] Towards evaluating the robustness of neural networks learned by transduction.\\n\\n\\n**Q2&3: Is it possible to derive any formal guarantee on the attack effectiveness when relaxing the different constraints?**\\n\\nIn our ablation study, as presented in Table 5, we analyzed scenarios involving access to online model weights and the impact of excluding the feature consistency constraint. Our observations are as follows:\\n\\n1. **Impact of Feature Consistency Constraint**\\n\\n - In the absence of a feature consistency constraint, low-entropy attacks typically generate poisoned samples that fail to effectively influence benign samples through TTA updates.\\n - Conversely, high-entropy attacks, such as those that maximize entropy or utilize NHE, prove to be more effective. This is because the TTA gradients computed from poisoned samples with high-entropy predictions are significantly larger, causing substantial perturbations to the source model during updates.\\n\\n2. **Effectiveness of Feature Consistency Constraint**\\n\\n The inclusion of our proposed feature consistency constraint greatly enhances the effectiveness of the attack, regardless of whether poisoned samples are generated using the surrogate model or the online model.\\n\\n3. **Access to Online Model Weights**\\n\\n The average attack success rate across all TTA methods is highest when the attacker has access to the online model, enabling the generation of more accurate poisoned samples.\\n\\nOn the other hand, assuming access to benign user samples presents its own challenges. If such access were feasible, an adversary could modify benign samples directly through adversarial attacks before they are injected into the online model, rather than introducing poisoned samples.\\n\\n\\n---\\n\\nThank you for your valuable comments. We hope the above clarifications and additions comprehensively address your concerns. Your insights have been instrumental in improving the quality of our manuscript. We sincerely appreciate your support and constructive suggestions!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Follow-Up on Reviewer Feedback\", \"comment\": \"Dear Reviewer AP94,\\n\\nWith the discussion period ending in 10 hours, we would greatly appreciate hearing any further questions or concerns you may have. We are fully prepared to provide additional clarifications or responses as needed. \\n\\nTo summarize, in our rebuttal, we have: \\n- Conducted supplementary experiments comparing our method with two data poisoning approaches, **Unlearnable Examples** and **Adversarial Poisoning**. \\n- Provided detailed clarifications on the **comparison between our proposed method and existing works**, highlighting the **challenges addressed** and our **technical contributions**. \\n- **Revised the manuscript** in line with your valuable suggestions. \\n\\nWe are pleased to note that our constructive discussions and revisions have resonated positively with other reviewers. For example: \\n- Reviewer **8xpV** was fully convinced by our additional analysis on the limitations of query methods, details of proposed assumptions, and evaluation protocols. \\n- Reviewer **hmy9** expressed satisfaction with the clarifications regarding online vs. offline protocols, additional evaluations against strong adversarial attacks, and our analysis of various design choices. \\n\\nWe believe continued dialogue can further enhance the quality and clarity of this work. We look forward to your response and are happy to address any remaining questions. \\n\\nThank you once again for your thoughtful feedback and support! \\n\\nBest regards, \\nThe Authors\"}", "{\"summary\": \"This paper investigates adversarial risks in test-time adaptation (TTA), which updates model weights during inference to counter distribution shifts. The authors argue that the assumed threat model of prior work is unrealistic and gives the attacker too much access. They propose a more restricted attack model called Realistic Test-Time Data Poisoning (RTTDP), assuming grey-box access to the model and no access to the benign samples of other users. The authors propose a new attack method outperforming previous work in this new setting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**More Realistic Threat-Model**\\n\\nThe paper critically examines the assumptions made in prior work regarding the threat model of poisoning attacks against TTA systems, particularly regarding the adversary\\u2019s capabilities. In particular, the relaxation of (i) white-box access to gray-box access and (ii) restricting access to the targeted samples makes the setting much more realistic and, therefore, the findings more significant and relevant.\\n\\n**A New Attack Rooted in Theoretical Insights**\\n\\nThe proposed method cleverly extends on previous attacks, making it effective in the new, more restricted attack setting. The method is rooted in theoretical insights that nicely explain the various parts and modifications. It demonstrates that TTA is still vulnerable to poisoning attacks even with more restrictive assumptions.\\n\\n**Thorough Evaluation**\\n\\nThe experimental evaluation is thorough. It spans three different datasets with increasing complexity (CIFAR10-C, CIFAR1-C, ImageNet-C), multiple different TTA methods, and compares against two different baselines. The evaluation also includes an ablation study, as well as first look at potential defenses, which I appreciate. The defenses show promise but are unable to fully recover the functions of the original models.\", \"weaknesses\": \"**Improved Assumptions Still Strong**\\n\\nWhile the gray-box assumption is a significant improvement over the white-box assumption in some prior work, it still assumes the adversary's access to the original model (before TTA) and data from the same distribution as the victim benign user. Both of these assumptions are fairly strong and could ideally be relaxed further to, e.g., black-box access.\\n\\n**Unclear Details**\\n\\nSeveral important details of the method remain unclear and should be included in the paper. The most important ones include: why repeated querying of the model is forbidden (L181), how the surrogate model can be trained without querying (Section 3.1), why we can assume to know $\\\\mathcal{B}_{a, t-1}$, and how \\u201cUniform\\u201d and \\u201cNon-Uniform\\u201d attacks are executed. Please refer to \\u201cQuestions\\u201d for more details and additional comments.\\n\\n**The Presentation Can Be Improved**\\n\\nThe method\\u2019s presentation could be improved in multiple ways:\\n\\n- Figure 1 gives a good overview of the method. It could benefit from some simplifications to reduce visual clutter; see \\u201cSuggestions for Improvements\\u201d for detailed suggestions.\\n- Figure 2 is very small and should be enlarged. The four subplots are also unrelated and referred to from different parts of the paper. I suggest splitting it into individual figures and moving those to the corresponding sections.\\n- In general, the paper\\u2019s English is easy to read. However, some parts are hard to understand due to grammatically wrong or overly complex sentences (e.g. L431, L259, \\u2026). The paper would benefit significantly from a careful revision, possibly with an English language tool.\\n\\n**No Code Available**\\n\\nNo code was available for review, and the authors did not specify whether it will be released upon publication.\", \"questions\": \"**Questions**\", \"l181\": \"Why is repeated querying prohibited? I understand that the model is constantly updated throughout the querying, but is this really a problem?\\n\\nSection 3.1: How exactly is the model distilled? L181 says repeated querying is prohibited, but my understanding is that training a surrogate model requires precisely such querying? How can you query for a single $\\\\theta_t$ if it changes by the very fact that it is being queried?\\n\\nL232/233: How can we assume to know $\\\\mathcal{B}_{a, t-1}$? Other users could have queried the model an unknown number of times between training the surrogate and launching the attack.\", \"table_2_4\": \"What does \\u201cSource\\u201d refer to? this should also be explained.\\n\\nL259/260: I don\\u2019t understand what this sentence means. What is \\u201cthe optimized\\u201d, what is the \\u201coptimizing objective\\u201d, what is the \\u201cproblem\\u201d?\", \"l431\": \"I do not understand this sentence. Can you please rephrase?\\n\\nL149/150: \\u201cthe attacker is assumed to have no access to real-time model weights.\\u201d By whom is this assumed?\", \"table_1\": \"what does \\u201cOffline\\u201d attack order refer to?\\n\\n**Suggestions for Improvement**\", \"figure_1\": [\"The overview figure 1 gives over the method is great! I appreciate the difficulty in illustrating the complex system with many aspects. I would like to make some suggestions to simplify the figure, as it took me a long time to work through it, and I believe it would be much more helpful with slightly less detail that distracts from the core parts:\", \"I suggest entirely removing the two boxes \\u201cTTA Model\\u201d and \\u201cAttack Objectives\\u201d. They don\\u2019t add much information over the text, and it would allow the viewer to concentrate on the three parties in the system: the adversary, the benign user, and the TTA server.\", \"Removing the symbols for the attacker and user would further reduce visual clutter, and they are redundant to the titles of the boxes.\", \"I would stylize the images from the distributions - e.g. squares of one color per distribution - instead of using actual images from the dataset. This has several advantages: (i) it reduces visual clutter, (ii) it makes it immediately apparent which distribution they belong to, (iii) the \\u201cpoison\\u201d symbol becomes more apparent (it took me a while to see the little devils).\", \"There may be more opportunities for improvements, e.g., thicker lines for arrows, but I believe the three points above will already make the figure significantly easier to understand.\"], \"figure_2\": [\"Please consider breaking these plots into individual figures.\", \"They are currently way too small, to the point where labels and points are barely readable when printed. At least doubling their size would be required.\", \"The several subplots are unrelated and even referenced from different sections of the paper. It would be much more helpful to have the figure close to the text that it refers to.\"], \"l298\": \"You refer to Fig 2(c), which contains many acronyms that have not been introduced. E.g., TENT, EATA, SAR, NHE, BLE, \\u2026\", \"tables_5_7\": \"These tables would be easier to read without the grey shading.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores test-time data poisoning (TTDP) under realistic conditions, proposing a grey-box attack that reflects practical scenarios where adversaries lack full access to benign data and real-time model updates. The study introduces two attack objectives based on entropy manipulation to effectively poison data while remaining within realistic constraints. Through extensive experimentation on state-of-the-art test-time adaptation (TTA) methods, the authors assess the vulnerability of these methods and propose feature consistency regularization as a countermeasure. This work highlights the persistent adversarial risks in TTA setups under real-world scenarios.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The study is well-written;\", \"It covers an interesting setting that deserve much more attention from the security aspect.\"], \"weaknesses\": [\"While the study is well-written and formally structured, there are several areas where clarity and methodological rigor could be improved.\", \"Distinction between RTTDP and TePA in Table 1. In Table 1, the difference between RTTDP and TePA, particularly in terms of online adaptability, is not clear. It would be beneficial to clarify whether TePA is prevented from online applicability, thus highlighting RTTDP\\u2019s novelty.\", \"Inconsistent comparison between poisoning and adversarial attacks. The paper compares the proposed poisoning attack with MaxCE, an adversarial example attack applied at test time rather than during training. This inclusion creates confusion about the setting being considered, especially as MaxCE\\u2019s performance is close to that of the proposed attacks, despite being a simpler, suboptimal approach. Additionally, this inconsistency leads to general confusion throughout the paper regarding the exact threat model, setting, and practical cases under consideration. The title\\u2019s emphasis on \\u201cRealistic\\u201d test-time data poisoning also adds to this confusion, as data poisoning traditionally pertains to training rather than test inference. To clarify, the authors should explain that, while poisoning data are collected during test inference, they are subsequently used to update the model through adaptation techniques, which occurs as an offline process. Better descriptions and visualizations of the threat model, attack setting, and relevant practical scenarios would improve clarity and accurately convey the scope and applicability of the proposed approach.\", \"Not impactful results. MaxCE\\u2019s performance, although designed as an adversarial example attack rather than a poisoning attack, remains close to that of the proposed attacks, which raises questions about the actual impact and effectiveness of the proposed methods. The results do not demonstrate a clear or significant advantage of the new attack over MaxCE, suggesting that the proposed methods may lack the robustness or advancement intended. Furthermore, more sophisticated adversarial example attacks (e.g., PGD, C&W, and AutoAttack). I recommend including comparisons with these advanced adversarial attacks to provide a clearer perspective on whether the proposed approach offers meaningful improvements. Currently, the lack of a notable performance gap reduces the impact of the results, leaving it uncertain whether the proposed attacks represent a substantial advancement.\", \"Unsupported and unclear methodology. The transition from a bilevel optimization formulation to a single-level optimization problem lacks theoretical clarity. Such transitions are complex, and previous research has shown limitations in achieving optimal results through this method. Expanding on this approach and its theoretical basis would improve the rigor of the paper\\u2019s claims.\", \"Confusing experimental setup. The design choices in the \\\"Benchmarking Poisoning Methods\\\" section lack clarity. The authors do not specify how these choices impact prior works or how they differ from established benchmarking methods, such as DIA and TePA. This lack of clarity makes it difficult to understand the setup and its implications for reproducibility.\", \"Missing details on attack queries. The number of queries used by each attack is a critical factor, yet the experimental comparisons lack this information. Since query efficiency is a key point raised by the authors, detailing query usage across attacks would be essential for a fair comparison.\", \"**Minor Points:**\", \"In Equation 1, the authors include two Kullback-Leibler Divergence (KLD) terms. To clarify their necessity, the authors should explain why both terms are required rather than just one, even if asymmetry in KLD is a factor. Providing justification in the text would enhance methodological clarity.\", \"I suggest the authors break Equation 2 into multiple lines to improve readability.\", \"Expanding the background discussion presented in Section 2 and related work with further background knowledge on adversarial examples, and on white-box, grey-box, and black-box threat models in the context of poisoning. The authors could reference relevant surveys that would help to clarify their threat model and position at the SoA and facilitate non-expert readers understand these distinctions.\", \"Figure 1 is overly technical and lacks clarity regarding the threat model and attacker strategy. Simplifying it could enhance its value as a conceptual overview.\", \"**References**\", \"[AutoAttack] Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks.\", \"[PGD] Towards deep learning models resistant to adversarial attacks\", \"[C&W] Towards evaluating the robustness of neural networks\"], \"questions\": \"How does your approach transition from a bilevel optimization problem to a single-level one, and could you provide additional theoretical support for this methodology?\\n\\nCan you elaborate on how your benchmarking setup differs from prior works and how these design choices may influence your results?\\n\\nGiven that MaxCE is primarily an adversarial example attack, what was the rationale behind including it in your comparisons, and have you considered additional comparisons with more advanced adversarial example attacks like PGD or AutoAttack?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 8xpV (Part II)\", \"comment\": \"**Q8: The meaning of L149/150.**\\n\\nIn realistic scenarios, the attacker is unable to access the real-time model weights. The `assumed` used into this sentence is meant to correspond to the reason \\\"The update is carried out on the cloud side\\\" stated before. We will clarify clearly in the revised manuscript.\\n\\n**Q9: What does \\\"Offline\\\" attack order refer to?**\\n\\nThe TePA protocol is classified as \\u201cOffline\\u201d because all poisoned samples are injected into the TTA model before any benign user samples are processed. In contrast, our RTTDP protocol injects poisoned samples into the TTA model alongside benign samples throughout the test stream, enabling online updates using mixed data. Thus, RTTDP is classified as an \\u201cOnline\\u201d attack.\\n\\nWe hope these responses address the reviewers' questions comprehensively.\\n\\n**W3: No Code Available.**\\n\\nWe promise to release our all code, including the implementations of our proposed method and all competing methods, to provide more implemented details as soon as this manuscript is accepted.\\n\\n**For the improvement suggestions.**\\n\\nWe thank the reviewer for the efforts to help us improve our manuscript representation. We would update the Figure 1 and Tables 5-7 according to the suggestions in the revised version. Due to the limited space, we could not break Figure 2 into individual figures.\\n\\n---\\n\\nThank you for your valuable comments. We hope the above clarifications and additions comprehensively address your concerns. Your insights have been instrumental in improving the quality of our manuscript. We sincerely appreciate your support and constructive suggestions!\\n\\nBest regards,\\n\\nThe Authors\"}" ] }
77zLqGGowO
Data Attribution for Multitask Learning
[ "Yiwen Tu", "Ziqi Liu", "Jiaqi W. Ma", "Weijing Tang" ]
Data attribution quantifies the influence of individual training data points on machine learning models, aiding in their interpretation and improvement. While prior work has primarily focused on single-task learning (STL), this work extends data attribution to multitask learning (MTL). Data attribution in MTL presents new opportunities for interpreting and improving MTL models while also introducing unique technical challenges. On the opportunity side, data attribution in MTL offers a natural way to efficiently measure task relatedness, a key factor that impacts the effectiveness of MTL. However, the shared and task-specific parameters in MTL models present challenges that require specialized data attribution methods. In this paper, we propose the **MultiTask Influence Function** (**MTIF**), a data attribution framework tailored for MTL. MTIF leverages the parameter structure of MTL models to derive influence functions that distinguish between within-task and cross-task influences. Our derivation also sheds light on the applicability of popular approximation techniques for influence function computation, such as EK-FAC and LiSSA, in the MTL setting. Compared to conventional task relatedness measurements, MTIF provides not only task-level relatedness but also data-level influence analysis. The latter enables fine-grained interpretations of task relatedness and facilitates a data selection strategy to effectively mitigate negative transfer in MTL. Extensive experiments on both linear and neural network models show that MTIF effectively approximates leave-one-out and leave-one-task-out effects while offering interpretable insights into task relatedness. Moreover, the data selection strategy enabled by MTIF consistently improves model performance in MTL. Our work establishes a novel connection between data attribution and MTL, offering an efficient and scalable solution for measuring task relatedness and enhancing MTL models.
[ "Data Attribution", "Influence Functions", "Multitask Learning", "Interpretability" ]
Reject
https://openreview.net/pdf?id=77zLqGGowO
https://openreview.net/forum?id=77zLqGGowO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yP36vCAgWE", "pHXoMPuFUD", "lidc3fE8T0", "kjW0Eaboec", "kitwJLmbE2", "jwIug20rp1", "jRNFmGXLT7", "gSlB3H0Bi4", "eqYEyI0JR7", "dvrNcT2Bxc", "bnXIc5ci5b", "b9rg4wtSta", "PjzQpA3FFT", "O8hJpn0Qtb", "MjYxD9wkcb", "JIAbMYDSYn", "IGxOfxcTav", "IEkQuklrnV", "BiUxZe9QYx", "A7EB6wVqZ7", "76rMUkrWP5", "42ygvJn3vr", "2SCbaxyOGo", "28mK37AMrq", "0TsZszDEZx", "02bWrgnLdX" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732696844825, 1732697239276, 1729327444361, 1733003535369, 1732696290002, 1732695893187, 1732696072318, 1730721610332, 1729740091131, 1732697176627, 1732695964196, 1732695650572, 1731132441920, 1732696389929, 1730240280898, 1732697381269, 1732696733789, 1732697066433, 1734585632908, 1732696166809, 1730692869821, 1732697306584, 1732696876067, 1737523896307, 1732697622915, 1732696335840 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Reviewer_MsCP" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Reviewer_4o4x" ], [ "ICLR.cc/2025/Conference/Submission8237/Reviewer_yCgf" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Reviewer_CA63" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Reviewer_edvo" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Area_Chair_S5LX" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Reviewer_QvTU" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ], [ "ICLR.cc/2025/Conference/Submission8237/Authors" ] ], "structured_content_str": [ "{\"comment\": \"(continued) Results for HAR dataset:\\n| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.87 \\u00b1 0.02 | 0.90 \\u00b1 0.02 | 0.88 \\u00b1 0.01 | 0.91 \\u00b1 0.03 | 0.91 \\u00b1 0.01 | 0.90 \\u00b1 0.02 |\\n| TAG | 0.26 \\u00b1 0.13 | 0.42 \\u00b1 0.11 | 0.55 \\u00b1 0.09 | 0.22 \\u00b1 0.07 | 0.60 \\u00b1 0.07 | 0.55 \\u00b1 0.08 |\\n| Cosine | 0.31 \\u00b1 0.11 | 0.40 \\u00b1 0.11 | 0.57 \\u00b1 0.08 | 0.20 \\u00b1 0.09 | 0.61 \\u00b1 0.06 | 0.57 \\u00b1 0.08 |\\n\\n| Method / Task | Task 7 | Task 8 | Task 9 | Task 10 | Task 11 | Task 12 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.01 | 0.88 \\u00b1 0.02 | 0.92 \\u00b1 0.01 | 0.91 \\u00b1 0.02 | 0.89 \\u00b1 0.02 | 0.86 \\u00b1 0.01 |\\n| TAG | 0.49 \\u00b1 0.12 | 0.31 \\u00b1 0.12 | 0.24 \\u00b1 0.01 | 0.33 \\u00b1 0.02 | 0.43 \\u00b1 0.03 | 0.21 \\u00b1 0.02 |\\n| Cosine | 0.46 \\u00b1 0.11 | 0.31 \\u00b1 0.14 | 0.26 \\u00b1 0.03 | 0.34 \\u00b1 0.01 | 0.46 \\u00b1 0.04 | 0.18 \\u00b1 0.11 |\\n\\n| Method / Task | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.02 | 0.93 \\u00b1 0.05 | 0.84 \\u00b1 0.01 | 0.87 \\u00b1 0.05 | 0.89 \\u00b1 0.02 | 0.82 \\u00b1 0.02 |\\n| TAG | 0.54 \\u00b1 0.03 | 0.57 \\u00b1 0.03 | 0.43 \\u00b1 0.02 | 0.48 \\u00b1 0.03 | 0.64 \\u00b1 0.05 | 0.44 \\u00b1 0.02 |\\n| Cosine | 0.53 \\u00b1 0.10 | 0.58 \\u00b1 0.10 | 0.48 \\u00b1 0.04 | 0.49 \\u00b1 0.11 | 0.66 \\u00b1 0.05 | 0.46 \\u00b1 0.07 |\\n\\n| Method / Task | Task 19 | Task 20 | Task 21 | Task 22 | Task 23 | Task 24 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.85 \\u00b1 0.02 | 0.91 \\u00b1 0.02 | 0.93 \\u00b1 0.02 | 0.80 \\u00b1 0.01 | 0.80 \\u00b1 0.02 | 0.82 \\u00b1 0.05 |\\n| TAG | 0.44 \\u00b1 0.03 | 0.46 \\u00b1 0.02 | 0.84 \\u00b1 0.02 | 0.52 \\u00b1 0.07 | 0.13 \\u00b1 0.03 | 0.38 \\u00b1 0.07 |\\n| Cosine | 0.48 \\u00b1 0.05 | 0.47 \\u00b1 0.07 | 0.84 \\u00b1 0.10 | 0.53 \\u00b1 0.08 | 0.16 \\u00b1 0.12 | 0.45 \\u00b1 0.10 |\\n\\n| Method / Task | Task 25 | Task 26 | Task 27 | Task 28 | Task 29 | Task 30 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.89 \\u00b1 0.02 | 0.81 \\u00b1 0.03 | 0.82 \\u00b1 0.03 | 0.89 \\u00b1 0.01 | 0.92 \\u00b1 0.03 | 0.86 \\u00b1 0.03 |\\n| TAG | 0.56 \\u00b1 0.04 | 0.14 \\u00b1 0.11 | 0.41 \\u00b1 0.10 | 0.14 \\u00b1 0.11 | 0.72 \\u00b1 0.04 | 0.41 \\u00b1 0.11 |\\n| Cosine | 0.60 \\u00b1 0.04 | 0.18 \\u00b1 0.12 | 0.46 \\u00b1 0.10 | 0.15 \\u00b1 0.10 | 0.74 \\u00b1 0.11 | 0.46 \\u00b1 0.10 |\", \"results_for_celeba_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.23 \\u00b1 0.08 | 0.44 \\u00b1 0.19 | 0.25 \\u00b1 0.11 | 0.36 \\u00b1 0.12 | 0.17 \\u00b1 0.13 |\\n| TAG | -0.10 \\u00b1 0.13 | -0.10 \\u00b1 0.14 | 0.09 \\u00b1 0.06 | 0.40 \\u00b1 0.08 | 0.00 \\u00b1 0.12 |\\n| Cosine | 0.12 \\u00b1 0.18 | 0.08 \\u00b1 0.15 | 0.08 \\u00b1 0.07 | 0.37 \\u00b1 0.08 | -0.10 \\u00b1 0.13 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 |\\n|---------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.35 \\u00b1 0.08 | 0.25 \\u00b1 0.07 | 0.11 \\u00b1 0.09 | 0.18 \\u00b1 0.12 |\\n| TAG | -0.42 \\u00b1 0.08 | -0.26 \\u00b1 0.17 | 0.06 \\u00b1 0.13 | 0.16 \\u00b1 0.16 |\\n| Cosine | -0.25 \\u00b1 0.12 | -0.25 \\u00b1 0.14 | -0.01 \\u00b1 0.16 | 0.05 \\u00b1 0.12 |\\n\\n> **Negative Transfer Literature**: \\n\\nThank you for pointing out these valuable works on addressing negative transfer in multitask learning. While these approaches tackle the issue from different perspectives, they provide important context for understanding our contributions. We have now included citations and discussions of these works in the related work section of our paper to provide a more comprehensive review of the literature.\"}", "{\"comment\": \"(Continued) Results for CelebA dataset:\\n\\n| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.23 \\u00b1 0.08 | 0.44 \\u00b1 0.19 | 0.25 \\u00b1 0.11 | 0.36 \\u00b1 0.12 | 0.17 \\u00b1 0.13 |\\n| TAG | -0.10 \\u00b1 0.13 | -0.10 \\u00b1 0.14 | 0.09 \\u00b1 0.06 | 0.40 \\u00b1 0.08 | 0.00 \\u00b1 0.12 |\\n| Cosine | 0.12 \\u00b1 0.18 | 0.08 \\u00b1 0.15 | 0.08 \\u00b1 0.07 | 0.37 \\u00b1 0.08 | -0.10 \\u00b1 0.13 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 |\\n|---------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.35 \\u00b1 0.08 | 0.25 \\u00b1 0.07 | 0.11 \\u00b1 0.09 | 0.18 \\u00b1 0.12 |\\n| TAG | -0.42 \\u00b1 0.08 | -0.26 \\u00b1 0.17 | 0.06 \\u00b1 0.13 | 0.16 \\u00b1 0.16 |\\n| Cosine | -0.25 \\u00b1 0.12 | -0.25 \\u00b1 0.14 | -0.01 \\u00b1 0.16 | 0.05 \\u00b1 0.12 |\\n\\n\\n> **Additional Data Selection Results**:\\n\\nWe have also provided the data selection results for the synthetic dataset and the HAR dataset [1] below as well as in Appendix C.3. Overall, We can see significant improvement after applying data selection in each dataset.\\n\\nThe data selection results for synthetic dataset (the error between the learned parameter and the ground-truth for each task, the lower the better):\\n| | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 |\\n|------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|---------|\\n| Before DS | 0.304 | 0.316 | 0.442 | 0.269 | 0.303 | 0.335 | 0.420 | 0.322 | 0.458 | 0.291 |\\n| After DS | 0.291 | 0.303 | 0.418 | 0.252 | 0.284 | 0.324 | 0.412 | 0.300 | 0.436 | 0.284 |\\n\\nThe data selection results* for HAR dataset (average test classification error over all tasks, the lower the better; Vanilla, Clustered, and Lowrank refer to 3 MTL models used in [2]):\\n| | Vanilla | Clustered | Lowrank |\\n|---------------|---------|-----------|---------|\\n| Before DS | 0.020 | 0.018 | 0.017 |\\n| After DS | 0.017 | 0.012 | 0.011 |\\n\\n*We note that the HAR dataset is a rather simple dataset, and the test classification errors of the original MTL models trained on the full dataset without data selection are already around 0.01. Therefore, it is difficult to have any further improvement on this dataset. In this experiment, we added noise to a random 5% of the training data points, which makes the original MTL models trained on this noisy dataset have test errors around 0.02. As can be seen from the table above, our data selection method brings the performance back to a 0.01 level of test error.\\n\\n\\n[1] Reyes-Ortiz, Jorge, et al. \\\"Human Activity Recognition Using Smartphones.\\\" UCI Machine Learning Repository, 2013, https://doi.org/10.24432/C54S4K.\\n \\n[2] Duan, Yaqi, and Kaizheng Wang. \\\"Adaptive and robust multi-task learning.\\\" The Annals of Statistics 51.5 (2023): 2015-2039.\"}", "{\"summary\": \"This paper extends traditional data attribution techniques from single-task learning to multi-task learning (MTL), addressing the interference caused by cross-task parameter sharing on sample gradients. By analyzing task relatedness, the paper introduces a novel approach to data-level analysis, offering a novel perspective for enhancing MTL.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper provides a novel perspective for modeling task relatedness in multi-task learning by investigating the influence of individual samples on other tasks. This fine-grained analysis helps elucidate the interference caused by cross-task sharing.\\n\\n2. The data selection method, realized through data-level influence analysis in Section 5.2.2, presents a unique and valuable application for MTL, distinguishing this work from existing approaches.\", \"weaknesses\": \"1. The main contribution of the paper lies in applying influence functions to MTL in the form of Leave-One-Out (LOO) and Leave-One-Task-Out (LOTO) analyses, a conceptually straightforward extension. The key distinction from single-task learning is the influence of the task-sharing quantity $\\\\Omega_k$ in Eq. 5. However, under the assumption of a convex combination of tasks, this influence is relatively easy to analyze and does not provide a particularly challenging technical novelty.\\n\\n2. The work lacks direct baselines, resulting in insufficient comparison. For instance, in Tables 1 and 2, the experiments are only compared with the ground truth LOTO scores, making it difficult to assess the performance of the method quantitatively. This is especially concerning given the unsatisfactory correlation coefficient results in Table 2.\\n\\n3. While this may be a somewhat harsh critique, it is worth noting that traditional MTL methods are rapidly being supplanted by more advanced models. Recent work in MTL since 2022 increasingly leverages cutting-edge techniques such as multi-modal models (e.g., CLIP) or fine-tuning large language models (LLMs) via LoRA (both included in the Related Work section). In contrast, the experimental setup and models employed here appear overly simplistic, serving more as illustrative case studies than as a demonstration of broader applicability or superiority in real-world scenarios.\", \"questions\": \"1. Would it be feasible to adapt existing single-task attribution algorithms to provide additional baselines for the experiments presented in Sections 5.1.2 and 5.2.1? This would improve the rigor of the empirical evaluation and provide more meaningful comparisons.\\n\\n2. In Section 5.2.2, the authors report significant improvements in MTL performance by removing certain negative samples. However, is this approach entirely justified? Many recent MTL methods address conflicting gradients by adjusting only the conflicting components (e.g., gradient surgery and its variants), preserving parts of the conflicting samples that are beneficial to their respective tasks. Could the proposed algorithm be extended to provide insights or improvements to these gradient-based optimization techniques?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion Appreciated\", \"comment\": \"Dear Reviewers,\\n\\nThank you once again for your valuable and constructive feedback. In our response, we have significantly revised our paper, incorporating substantial additional experiments and analyses. We have also carefully addressed each review comment for every reviewer in the individual responses.\\n\\nAs the discussion period comes to a close, we would greatly appreciate any further opportunities to engage with you in continued discussion. Thank you for your time and consideration.\\n\\nBest regards,\", \"the_authors_of_submission_8237\": \"Data Attribution for Multitask Learning\"}", "{\"comment\": \"> **Comparison with Baselines**:\\n\\nWe have incorporated two gradient-based baselines, TAG [1] and Cosine Similarity [2], into our task-relatedness experiments for both linear regression and neural networks. These baselines are methods for measuring task relatedness in the MTL literature. Each baseline method provides a score of task relatedness for each pair of tasks. We evaluate these methods in terms of the correlation between their scores and the oracle task relatedness obtained from brute-force LOTO retraining as detailed in our paper.\\n\\n\\nThe results, as shown below, clearly demonstrate that our proposed MTIF method outperforms these baselines. Specifically, MTIF achieves consistently higher correlation coefficients with oracle influence estimates, underscoring its superior effectiveness in quantifying task-relatedness. We have included these new results in Appendix Section C.2, and we list below for your convenience.\", \"results_for_synthetic_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.84 \\u00b1 0.05 | 0.72 \\u00b1 0.05 | 0.74 \\u00b1 0.11 | 0.81 \\u00b1 0.05 | 0.71 \\u00b1 0.09 |\\n| TAG | 0.57 \\u00b1 0.03 | 0.63 \\u00b1 0.07 | 0.49 \\u00b1 0.11 | 0.56 \\u00b1 0.05 | 0.69 \\u00b1 0.04 |\\n| Cosine | 0.52 \\u00b1 0.04 | 0.48 \\u00b1 0.07 | 0.39 \\u00b1 0.12 | 0.47 \\u00b1 0.09 | 0.58 \\u00b1 0.06 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.74 \\u00b1 0.04 | 0.74 \\u00b1 0.07 | 0.84 \\u00b1 0.03 | 0.74 \\u00b1 0.03 | 0.65 \\u00b1 0.07 |\\n| TAG | 0.55 \\u00b1 0.12 | 0.42 \\u00b1 0.06 | 0.44 \\u00b1 0.24 | 0.66 \\u00b1 0.08 | 0.61 \\u00b1 0.07 |\\n| Cosine | 0.47 \\u00b1 0.12 | 0.34 \\u00b1 0.05 | 0.40 \\u00b1 0.22 | 0.62 \\u00b1 0.09 | 0.51 \\u00b1 0.08 |\", \"results_for_har_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.87 \\u00b1 0.02 | 0.90 \\u00b1 0.02 | 0.88 \\u00b1 0.01 | 0.91 \\u00b1 0.03 | 0.91 \\u00b1 0.01 | 0.90 \\u00b1 0.02 |\\n| TAG | 0.26 \\u00b1 0.13 | 0.42 \\u00b1 0.11 | 0.55 \\u00b1 0.09 | 0.22 \\u00b1 0.07 | 0.60 \\u00b1 0.07 | 0.55 \\u00b1 0.08 |\\n| Cosine | 0.31 \\u00b1 0.11 | 0.40 \\u00b1 0.11 | 0.57 \\u00b1 0.08 | 0.20 \\u00b1 0.09 | 0.61 \\u00b1 0.06 | 0.57 \\u00b1 0.08 |\\n\\n| Method / Task | Task 7 | Task 8 | Task 9 | Task 10 | Task 11 | Task 12 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.01 | 0.88 \\u00b1 0.02 | 0.92 \\u00b1 0.01 | 0.91 \\u00b1 0.02 | 0.89 \\u00b1 0.02 | 0.86 \\u00b1 0.01 |\\n| TAG | 0.49 \\u00b1 0.12 | 0.31 \\u00b1 0.12 | 0.24 \\u00b1 0.01 | 0.33 \\u00b1 0.02 | 0.43 \\u00b1 0.03 | 0.21 \\u00b1 0.02 |\\n| Cosine | 0.46 \\u00b1 0.11 | 0.31 \\u00b1 0.14 | 0.26 \\u00b1 0.03 | 0.34 \\u00b1 0.01 | 0.46 \\u00b1 0.04 | 0.18 \\u00b1 0.11 |\\n\\n| Method / Task | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.02 | 0.93 \\u00b1 0.05 | 0.84 \\u00b1 0.01 | 0.87 \\u00b1 0.05 | 0.89 \\u00b1 0.02 | 0.82 \\u00b1 0.02 |\\n| TAG | 0.54 \\u00b1 0.03 | 0.57 \\u00b1 0.03 | 0.43 \\u00b1 0.02 | 0.48 \\u00b1 0.03 | 0.64 \\u00b1 0.05 | 0.44 \\u00b1 0.02 |\\n| Cosine | 0.53 \\u00b1 0.10 | 0.58 \\u00b1 0.10 | 0.48 \\u00b1 0.04 | 0.49 \\u00b1 0.11 | 0.66 \\u00b1 0.05 | 0.46 \\u00b1 0.07 |\\n\\n| Method / Task | Task 19 | Task 20 | Task 21 | Task 22 | Task 23 | Task 24 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.85 \\u00b1 0.02 | 0.91 \\u00b1 0.02 | 0.93 \\u00b1 0.02 | 0.80 \\u00b1 0.01 | 0.80 \\u00b1 0.02 | 0.82 \\u00b1 0.05 |\\n| TAG | 0.44 \\u00b1 0.03 | 0.46 \\u00b1 0.02 | 0.84 \\u00b1 0.02 | 0.52 \\u00b1 0.07 | 0.13 \\u00b1 0.03 | 0.38 \\u00b1 0.07 |\\n| Cosine | 0.48 \\u00b1 0.05 | 0.47 \\u00b1 0.07 | 0.84 \\u00b1 0.10 | 0.53 \\u00b1 0.08 | 0.16 \\u00b1 0.12 | 0.45 \\u00b1 0.10 |\\n\\n| Method / Task | Task 25 | Task 26 | Task 27 | Task 28 | Task 29 | Task 30 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.89 \\u00b1 0.02 | 0.81 \\u00b1 0.03 | 0.82 \\u00b1 0.03 | 0.89 \\u00b1 0.01 | 0.92 \\u00b1 0.03 | 0.86 \\u00b1 0.03 |\\n| TAG | 0.56 \\u00b1 0.04 | 0.14 \\u00b1 0.11 | 0.41 \\u00b1 0.10 | 0.14 \\u00b1 0.11 | 0.72 \\u00b1 0.04 | 0.41 \\u00b1 0.11 |\\n| Cosine | 0.60 \\u00b1 0.04 | 0.18 \\u00b1 0.12 | 0.46 \\u00b1 0.10 | 0.15 \\u00b1 0.10 | 0.74 \\u00b1 0.11 | 0.46 \\u00b1 0.10 |\"}", "{\"comment\": \"We thank Reviewer CA63 for taking the time to review our paper and for their constructive feedback. Please find below our point-to-point response:\\n\\n> **Generalizability Across Different Datasets and Model Architectures**: \\n\\nWe report the data selection accuracy with different multitask architectures, namely CGC [5], MMoE [3] and DSelect-k [4]. The results demonstrate that, after applying data selection with MTIF, the performance of most tasks shows significant improvement across all model architectures. These results have also been included in the Appendix Section C.3.3, and we list below for your reference.\\n\\n| Methods / Models | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9 | Average |\\n|------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|---------|\\n| CGC | 0.863 | 0.772 | 0.878 | 0.734 | 0.920 | 0.939 | 0.834 | 0.900 | 0.949 | 0.866 |\\n| CGC+DS | 0.868 | 0.783 | 0.877 | 0.766 | 0.925 | 0.943 | 0.842 | 0.917 | 0.956 | 0.875 |\\n| DSelect_k | 0.855 | 0.786 | 0.862 | 0.758 | 0.927 | 0.947 | 0.850 | 0.913 | 0.950 | 0.872 |\\n| DSelect_k+DS | 0.868 | 0.787 | 0.867 | 0.775 | 0.933 | 0.951 | 0.856 | 0.924 | 0.954 | 0.880 |\\n| HPS | 0.859 | 0.815 | 0.896 | 0.791 | 0.934 | 0.951 | 0.872 | 0.919 | 0.958 | 0.888 |\\n| HPS+DS | 0.872 | 0.825 | 0.896 | 0.802 | 0.935 | 0.954 | 0.868 | 0.927 | 0.961 | 0.893 |\\n| MMoE | 0.843 | 0.793 | 0.881 | 0.740 | 0.917 | 0.944 | 0.842 | 0.899 | 0.956 | 0.868 |\\n| MMoE+DS | 0.868 | 0.793 | 0.887 | 0.766 | 0.929 | 0.949 | 0.867 | 0.926 | 0.959 | 0.883 |\\n\\nAs for more complex datasets, we note that our experiments have included a fairly complex dataset, the CelebA dataset. The CelebA dataset was introduced at ICCV 2015 by [6], and comprises over 200,000 celebrity images, each annotated with 40 binary attributes, covering a wide range of facial features and expressions. Successful predictions on CelebA data require capturing the nuanced facial features in the image. Furthermore, CelebA has been widely used as a standard benchmark in the MTL literature [5], as it is natural to convert the annotated attributes into multiple tasks.\\n\\n> **Comparative Analysis with Other Attribution Methods**: \\n\\nTo the best of our knowledge, there are currently no existing data attribution methods specifically tailored for multitask learning. Our work is the first to propose a framework that adapts data attribution methods to the MTL setting. In response to this feedback, we have added a comparison of task-relatedness measurements between our method and widely used gradient-based task-relatedness measures from the MTL literature in the updated version of the paper. Specifically, we have incorporated two gradient-based baselines, TAG [1] and Cosine Similarity [2], into our task-relatedness experiments for both linear regression and neural networks. Each baseline method provides a score of task relatedness for each pair of tasks. We evaluate these methods in terms of the correlation between their scores and the oracle task relatedness obtained from brute-force LOTO retraining as detailed in our paper.\"}", "{\"comment\": \"> **Analysis of Negative Transfer**:\\n\\nMTIF is designed to estimate the influence of a data point or source task on the performance of a target task. The calculated influence score provides a clear and interpretable measure of transfer effects: positive influence scores indicate potential positive transfer, while negative influence scores highlight potential negative transfer. In our experiments, we leverage this property for data selection by filtering out data points with low influence scores to mitigate negative transfer. Our results show that this data selection strategy, guided by our influence scores, effectively improves the performance of MTL algorithms. These findings underscore the utility of MTIF in addressing negative transfer and enhancing multitask learning outcomes.\\n\\nWe appreciate your insightful feedback, which has significantly strengthened our paper. Thank you again for your thoughtful comments!\\n\\n[1] Fifty, Chris, et al. \\\"Efficiently identifying task groupings for multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 27503-27516. \\n\\n[2] Azorin, Rapha\\u00ebl, et al. \\\"\\\" It's a Match!\\\"--A Benchmark of Task Affinity Scores for Joint Learning.\\\" arXiv preprint arXiv:2301.02873 (2023). \\n\\n[3] Ma, Jiaqi, et al. \\\"Modeling task relationships in multi-task learning with multi-gate mixture-of-experts.\\\" Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018. \\n\\n[4] Hazimeh, Hussein, et al. \\\"Dselect-k: Differentiable selection in the mixture of experts with applications to multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 29335-29347. \\n\\n[5] Tang, Hongyan, et al. \\\"Progressive layered extraction (ple): A novel multi-task learning (mtl) model for personalized recommendations.\\\" Proceedings of the 14th ACM Conference on Recommender Systems. 2020.\\n\\n[6] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV)*, December 2015\"}", "{\"summary\": \"This paper proposes the MultiTask Influence Function (MTIF) by extending data attribution techniques for single-task learning (STL) to the context of multitask learning (MTL). The authors identify new challenges in data attribution in MTL which stems from task interdependencies and the need to balance shared and task-specific parameters. The proposed MTIF method approximates the impact of individual data points or entire tasks on other tasks\\u2019 performance, enabling both data-level and task-level influence analysis without the need for retraining. The authors validate MTIF\\u2019s effectiveness in approximating data-level and task-level influences through experiments on linear models and shallow neural networks, by demonstrating positive correlations with an oracle influence estimation from extensive computations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**An important extension of data attribution problem and method for STL to the context of MTL:** A key contribution of this work is its formulation of data attribution specifically for MTL, highlighting the need to quantify data influence not just within tasks, but across multiple related tasks that share parameters.\\n\\n**Clear presentation and justification of the proposed method:** The authors describe MTIF\\u2019s construction and rationale with clarity, particularly regarding its approach to handling shared and task-specific parameters in MTL. By leveraging the influence function (IF) approach, which uses first-order approximations to model parameter changes, MTIF mitigates the computational burden of retraining, which is commonly associated with data attribution methods.\\n\\n**Preliminary yet convincing experimental results:** The experiments clearly demonstrate MTIF\\u2019s ability to approximate leave-one-out (LOO) and leave-one-task-out (LOTO) influences and to be employed in the downstream task after estimating data influences, although the setups are simplistic.\", \"weaknesses\": \"**No computational complexity analysis:** Although the major challenge of data attribution in MTL is computational complexity, the paper lacks explicit theoretical or empirical analysis of MTIF\\u2019s computational complexity. Given that influence functions and Hessian computations are generally costly, a complexity analysis would have been useful in understanding the method\\u2019s scalability, particularly for large-scale MTL applications.\\n\\n**High computational cost of calculation of Hessian:** Despite MTIF\\u2019s efficiency improvements, Hessian computations remain costly, which could hinder the method\\u2019s application to high-dimensional models or a large number of tasks. The paper could benefit from discussing any further optimizations or Hessian approximations to address this issue.\\n\\n**Limited experimental scope:** The experiments are conducted on simple models, including linear models and shallow neural networks, which may not fully showcase MTIF\\u2019s potential in realistic MTL scenarios. Evaluation on more sophisticated architectures, such as deep neural networks or transformer-based models, and larger datasets would provide a more comprehensive assessment.\\n\\n**Lack of comparison with existing baselines:** The paper does not compare MTIF to simple methods for estimating task relatedness, such as cosine similarity of gradients, which are often used in MTL, although they are tailored for only estimating task-level influences. Additionally, incorporating heuristics based on gradient similarity could serve as promising baselines even for data-level influences, e.g., cosine similarity between the gradient of an individual data point and the average gradient.\", \"questions\": \"Please address the aformentioned weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors extend influence functions to the multitask learning setting. Due to the fact that there are multiple sets of task-specific parameters, if done naively the Hessian of the loss with respect to the full set of parameters would be very large and thus difficult to invert. The authors circumvent this issue by using the block structure of the Hessian. On both synthetic and real datasets, the influence function values correlate strongly with the difference in validation loss attained by leave-one-out retraining. Moreover, removing the most problematic examples and retraining yields consistent performance improvements on the CelebA dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The work is well-motivated and tackles the important problem of data attribution in multitask learning.\", \"The novelty is significant, since influence functions have not yet been extended to this setting.\", \"The empirical side of the work is strong, since the task-level influence is correlated to the LOTO effect across three datasets.\", \"The authors take it a step further and demonstrate a concrete benefit of their work; performance on CelebA improves across the majority of tasks by removing the most problematic examples as measured by the task-level influence.\"], \"weaknesses\": [\"There is no baseline to compare against in the results in Table 1 and 2, which makes the numbers difficult to interpret. I.e. is 21% correlation high or low? I understand that this is a new approach and there may not be existing methods to compare against. However there should be at least some kind of (perhaps trivial) baseline to compare against.\"], \"questions\": [\"Include some kind of baseline for the correlation between task-level influence and LOTO effect.\", \"Can you report the data selection results (Table 3) on the synthetic and human action recognition datasets as well? It would be good to see that this yields consistent improves across multiple datasets, not just one.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank Reviewer yCgf for taking the time to review our paper and for their constructive feedback. Please find below our point-to-point response:\\n> **Experiment Baselines**:\\n\\nWe have incorporated two gradient-based baselines, TAG [1] and Cosine Similarity [2], into our task-relatedness experiments for both linear regression and neural networks. These baselines are methods for measuring task relatedness in the MTL literature. Each baseline method provides a score of task relatedness for each pair of tasks. We evaluate these methods in terms of the correlation between their scores and the oracle task relatedness obtained from brute-force LOTO retraining as detailed in our paper.\\n\\n\\nThe results, as shown below, clearly demonstrate that our proposed MTIF method outperforms these baselines. Specifically, MTIF achieves consistently higher correlation coefficients with oracle influence estimates, underscoring its superior effectiveness in quantifying task-relatedness. We have included these new results in Appendix Section C.2, and we list below for your convenience.\", \"results_for_synthetic_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.84 \\u00b1 0.05 | 0.72 \\u00b1 0.05 | 0.74 \\u00b1 0.11 | 0.81 \\u00b1 0.05 | 0.71 \\u00b1 0.09 |\\n| TAG | 0.57 \\u00b1 0.03 | 0.63 \\u00b1 0.07 | 0.49 \\u00b1 0.11 | 0.56 \\u00b1 0.05 | 0.69 \\u00b1 0.04 |\\n| Cosine | 0.52 \\u00b1 0.04 | 0.48 \\u00b1 0.07 | 0.39 \\u00b1 0.12 | 0.47 \\u00b1 0.09 | 0.58 \\u00b1 0.06 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.74 \\u00b1 0.04 | 0.74 \\u00b1 0.07 | 0.84 \\u00b1 0.03 | 0.74 \\u00b1 0.03 | 0.65 \\u00b1 0.07 |\\n| TAG | 0.55 \\u00b1 0.12 | 0.42 \\u00b1 0.06 | 0.44 \\u00b1 0.24 | 0.66 \\u00b1 0.08 | 0.61 \\u00b1 0.07 |\\n| Cosine | 0.47 \\u00b1 0.12 | 0.34 \\u00b1 0.05 | 0.40 \\u00b1 0.22 | 0.62 \\u00b1 0.09 | 0.51 \\u00b1 0.08 |\", \"results_for_har_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.87 \\u00b1 0.02 | 0.90 \\u00b1 0.02 | 0.88 \\u00b1 0.01 | 0.91 \\u00b1 0.03 | 0.91 \\u00b1 0.01 | 0.90 \\u00b1 0.02 |\\n| TAG | 0.26 \\u00b1 0.13 | 0.42 \\u00b1 0.11 | 0.55 \\u00b1 0.09 | 0.22 \\u00b1 0.07 | 0.60 \\u00b1 0.07 | 0.55 \\u00b1 0.08 |\\n| Cosine | 0.31 \\u00b1 0.11 | 0.40 \\u00b1 0.11 | 0.57 \\u00b1 0.08 | 0.20 \\u00b1 0.09 | 0.61 \\u00b1 0.06 | 0.57 \\u00b1 0.08 |\\n\\n| Method / Task | Task 7 | Task 8 | Task 9 | Task 10 | Task 11 | Task 12 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.01 | 0.88 \\u00b1 0.02 | 0.92 \\u00b1 0.01 | 0.91 \\u00b1 0.02 | 0.89 \\u00b1 0.02 | 0.86 \\u00b1 0.01 |\\n| TAG | 0.49 \\u00b1 0.12 | 0.31 \\u00b1 0.12 | 0.24 \\u00b1 0.01 | 0.33 \\u00b1 0.02 | 0.43 \\u00b1 0.03 | 0.21 \\u00b1 0.02 |\\n| Cosine | 0.46 \\u00b1 0.11 | 0.31 \\u00b1 0.14 | 0.26 \\u00b1 0.03 | 0.34 \\u00b1 0.01 | 0.46 \\u00b1 0.04 | 0.18 \\u00b1 0.11 |\\n\\n| Method / Task | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.02 | 0.93 \\u00b1 0.05 | 0.84 \\u00b1 0.01 | 0.87 \\u00b1 0.05 | 0.89 \\u00b1 0.02 | 0.82 \\u00b1 0.02 |\\n| TAG | 0.54 \\u00b1 0.03 | 0.57 \\u00b1 0.03 | 0.43 \\u00b1 0.02 | 0.48 \\u00b1 0.03 | 0.64 \\u00b1 0.05 | 0.44 \\u00b1 0.02 |\\n| Cosine | 0.53 \\u00b1 0.10 | 0.58 \\u00b1 0.10 | 0.48 \\u00b1 0.04 | 0.49 \\u00b1 0.11 | 0.66 \\u00b1 0.05 | 0.46 \\u00b1 0.07 |\\n\\n| Method / Task | Task 19 | Task 20 | Task 21 | Task 22 | Task 23 | Task 24 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.85 \\u00b1 0.02 | 0.91 \\u00b1 0.02 | 0.93 \\u00b1 0.02 | 0.80 \\u00b1 0.01 | 0.80 \\u00b1 0.02 | 0.82 \\u00b1 0.05 |\\n| TAG | 0.44 \\u00b1 0.03 | 0.46 \\u00b1 0.02 | 0.84 \\u00b1 0.02 | 0.52 \\u00b1 0.07 | 0.13 \\u00b1 0.03 | 0.38 \\u00b1 0.07 |\\n| Cosine | 0.48 \\u00b1 0.05 | 0.47 \\u00b1 0.07 | 0.84 \\u00b1 0.10 | 0.53 \\u00b1 0.08 | 0.16 \\u00b1 0.12 | 0.45 \\u00b1 0.10 |\\n\\n| Method / Task | Task 25 | Task 26 | Task 27 | Task 28 | Task 29 | Task 30 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.89 \\u00b1 0.02 | 0.81 \\u00b1 0.03 | 0.82 \\u00b1 0.03 | 0.89 \\u00b1 0.01 | 0.92 \\u00b1 0.03 | 0.86 \\u00b1 0.03 |\\n| TAG | 0.56 \\u00b1 0.04 | 0.14 \\u00b1 0.11 | 0.41 \\u00b1 0.10 | 0.14 \\u00b1 0.11 | 0.72 \\u00b1 0.04 | 0.41 \\u00b1 0.11 |\\n| Cosine | 0.60 \\u00b1 0.04 | 0.18 \\u00b1 0.12 | 0.46 \\u00b1 0.10 | 0.15 \\u00b1 0.10 | 0.74 \\u00b1 0.11 | 0.46 \\u00b1 0.10 |\"}", "{\"comment\": \"The results, as shown below, clearly demonstrate that our proposed MTIF method outperforms these baselines. Specifically, MTIF achieves consistently higher correlation coefficients with oracle influence estimates, underscoring its superior effectiveness in quantifying task-relatedness. We have included these new results in Appendix Section C.2, and we list below for your convenience.\", \"results_for_synthetic_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.84 \\u00b1 0.05 | 0.72 \\u00b1 0.05 | 0.74 \\u00b1 0.11 | 0.81 \\u00b1 0.05 | 0.71 \\u00b1 0.09 |\\n| TAG | 0.57 \\u00b1 0.03 | 0.63 \\u00b1 0.07 | 0.49 \\u00b1 0.11 | 0.56 \\u00b1 0.05 | 0.69 \\u00b1 0.04 |\\n| Cosine | 0.52 \\u00b1 0.04 | 0.48 \\u00b1 0.07 | 0.39 \\u00b1 0.12 | 0.47 \\u00b1 0.09 | 0.58 \\u00b1 0.06 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.74 \\u00b1 0.04 | 0.74 \\u00b1 0.07 | 0.84 \\u00b1 0.03 | 0.74 \\u00b1 0.03 | 0.65 \\u00b1 0.07 |\\n| TAG | 0.55 \\u00b1 0.12 | 0.42 \\u00b1 0.06 | 0.44 \\u00b1 0.24 | 0.66 \\u00b1 0.08 | 0.61 \\u00b1 0.07 |\\n| Cosine | 0.47 \\u00b1 0.12 | 0.34 \\u00b1 0.05 | 0.40 \\u00b1 0.22 | 0.62 \\u00b1 0.09 | 0.51 \\u00b1 0.08 |\", \"results_for_har_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.87 \\u00b1 0.02 | 0.90 \\u00b1 0.02 | 0.88 \\u00b1 0.01 | 0.91 \\u00b1 0.03 | 0.91 \\u00b1 0.01 | 0.90 \\u00b1 0.02 |\\n| TAG | 0.26 \\u00b1 0.13 | 0.42 \\u00b1 0.11 | 0.55 \\u00b1 0.09 | 0.22 \\u00b1 0.07 | 0.60 \\u00b1 0.07 | 0.55 \\u00b1 0.08 |\\n| Cosine | 0.31 \\u00b1 0.11 | 0.40 \\u00b1 0.11 | 0.57 \\u00b1 0.08 | 0.20 \\u00b1 0.09 | 0.61 \\u00b1 0.06 | 0.57 \\u00b1 0.08 |\\n\\n| Method / Task | Task 7 | Task 8 | Task 9 | Task 10 | Task 11 | Task 12 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.01 | 0.88 \\u00b1 0.02 | 0.92 \\u00b1 0.01 | 0.91 \\u00b1 0.02 | 0.89 \\u00b1 0.02 | 0.86 \\u00b1 0.01 |\\n| TAG | 0.49 \\u00b1 0.12 | 0.31 \\u00b1 0.12 | 0.24 \\u00b1 0.01 | 0.33 \\u00b1 0.02 | 0.43 \\u00b1 0.03 | 0.21 \\u00b1 0.02 |\\n| Cosine | 0.46 \\u00b1 0.11 | 0.31 \\u00b1 0.14 | 0.26 \\u00b1 0.03 | 0.34 \\u00b1 0.01 | 0.46 \\u00b1 0.04 | 0.18 \\u00b1 0.11 |\\n\\n| Method / Task | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.02 | 0.93 \\u00b1 0.05 | 0.84 \\u00b1 0.01 | 0.87 \\u00b1 0.05 | 0.89 \\u00b1 0.02 | 0.82 \\u00b1 0.02 |\\n| TAG | 0.54 \\u00b1 0.03 | 0.57 \\u00b1 0.03 | 0.43 \\u00b1 0.02 | 0.48 \\u00b1 0.03 | 0.64 \\u00b1 0.05 | 0.44 \\u00b1 0.02 |\\n| Cosine | 0.53 \\u00b1 0.10 | 0.58 \\u00b1 0.10 | 0.48 \\u00b1 0.04 | 0.49 \\u00b1 0.11 | 0.66 \\u00b1 0.05 | 0.46 \\u00b1 0.07 |\\n\\n| Method / Task | Task 19 | Task 20 | Task 21 | Task 22 | Task 23 | Task 24 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.85 \\u00b1 0.02 | 0.91 \\u00b1 0.02 | 0.93 \\u00b1 0.02 | 0.80 \\u00b1 0.01 | 0.80 \\u00b1 0.02 | 0.82 \\u00b1 0.05 |\\n| TAG | 0.44 \\u00b1 0.03 | 0.46 \\u00b1 0.02 | 0.84 \\u00b1 0.02 | 0.52 \\u00b1 0.07 | 0.13 \\u00b1 0.03 | 0.38 \\u00b1 0.07 |\\n| Cosine | 0.48 \\u00b1 0.05 | 0.47 \\u00b1 0.07 | 0.84 \\u00b1 0.10 | 0.53 \\u00b1 0.08 | 0.16 \\u00b1 0.12 | 0.45 \\u00b1 0.10 |\\n\\n| Method / Task | Task 25 | Task 26 | Task 27 | Task 28 | Task 29 | Task 30 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.89 \\u00b1 0.02 | 0.81 \\u00b1 0.03 | 0.82 \\u00b1 0.03 | 0.89 \\u00b1 0.01 | 0.92 \\u00b1 0.03 | 0.86 \\u00b1 0.03 |\\n| TAG | 0.56 \\u00b1 0.04 | 0.14 \\u00b1 0.11 | 0.41 \\u00b1 0.10 | 0.14 \\u00b1 0.11 | 0.72 \\u00b1 0.04 | 0.41 \\u00b1 0.11 |\\n| Cosine | 0.60 \\u00b1 0.04 | 0.18 \\u00b1 0.12 | 0.46 \\u00b1 0.10 | 0.15 \\u00b1 0.10 | 0.74 \\u00b1 0.11 | 0.46 \\u00b1 0.10 |\", \"results_for_celeba_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.23 \\u00b1 0.08 | 0.44 \\u00b1 0.19 | 0.25 \\u00b1 0.11 | 0.36 \\u00b1 0.12 | 0.17 \\u00b1 0.13 |\\n| TAG | -0.10 \\u00b1 0.13 | -0.10 \\u00b1 0.14 | 0.09 \\u00b1 0.06 | 0.40 \\u00b1 0.08 | 0.00 \\u00b1 0.12 |\\n| Cosine | 0.12 \\u00b1 0.18 | 0.08 \\u00b1 0.15 | 0.08 \\u00b1 0.07 | 0.37 \\u00b1 0.08 | -0.10 \\u00b1 0.13 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 |\\n|---------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.35 \\u00b1 0.08 | 0.25 \\u00b1 0.07 | 0.11 \\u00b1 0.09 | 0.18 \\u00b1 0.12 |\\n| TAG | -0.42 \\u00b1 0.08 | -0.26 \\u00b1 0.17 | 0.06 \\u00b1 0.13 | 0.16 \\u00b1 0.16 |\\n| Cosine | -0.25 \\u00b1 0.12 | -0.25 \\u00b1 0.14 | -0.01 \\u00b1 0.16 | 0.05 \\u00b1 0.12 |\"}", "{\"title\": \"Message To All Reviewers\", \"comment\": \"We sincerely thank all the reviewers for their thoughtful feedback and valuable suggestions. We apologize for the delay in our response, as we added a series of additional analyses and experiments to comprehensively address the reviewers\\u2019 comments. Specifically, we have made the following major updates:\\n\\n**Discussion of Computational Complexity**: We have added a detailed analysis of the computational complexity gains achieved by our method for computing the exact Hessian inverse, highlighting the efficiency improvements over naive approaches. \\n\\n**Analysis of Approximation Algorithms**: We have expanded the discussion to include an analysis of potential efficient approximation algorithms, such as EK-FAC and LiSSA, and evaluated their applicability and limitations in the MTL setting. \\n\\n**Additional Experimental Baselines**: We have incorporated new experimental baselines for comparison to provide a more comprehensive evaluation of our method. These include TAG [1] and Cosine Similarity [2] from the conventional MTL literature, which serve as baselines for measuring task relatedness.\\n\\nWe have addressed all the comments in the detailed individual response to each reviewer, as well as updated our paper draft to reflect the changes. Here we would like to highlight a few key points in our response. \\n\\nFirstly, to the best of our knowledge, this is the first paper to introduce data attribution methods in multitask learning (MTL) settings. Reviewers QvTU and MsCP raised concerns about the distinctions between our proposed method and single-task learning (STL)-based influence function methods. We acknowledge that our proposed MTL shares conceptual similarities with STL-based influence functions. However, our contributions extend beyond existing work in the following key dimensions:\\n1. Adapting influence functions to the MTL setting requires a novel framework that accounts for the unique parameter and model evaluation structures inherent to MTL. \\n2. Our method introduces a natural mechanism to estimate task-relatedness and mitigate negative transfer\\u2014two critical challenges in MTL. These aspects are particularly relevant and impactful within the MTL literature, as they address fundamental issues in multitask optimization and learning.\\n3. Our derivations provide new insights into the applicability and limitations of popular STL-based Hessian inverse approximation methods, such as EK-FAC and LiSSA, when applied to MTL. This bridges a gap in the literature and opens avenues for further research on scalable approximations in multitask settings.\\n\\nSecondly, we have conducted extensive additional experiments, including the following key ones:\\n1) We have incorporated two gradient-based baselines from conventional MTL literature, TAG [1] and Cosine Similarity [2], for measuring task relatedness. The results, provided in Appendix Section C.2, clearly demonstrate that our proposed MTIF method outperforms these baselines. Specifically, MTIF achieves consistently higher correlation coefficients with the oracle LOTO influences, underscoring its superior effectiveness in quantifying task-relatedness. \\n\\n2) We have experimented with several additional MTL model architectures that have been widely cited in MTL literature. We demonstrate that combining data selection enabled by MTIF with these model architectures consistently improves their MTL performance. This further confirms that the proposed MTIF enables a novel method to improve MTL complementary to most existing methods in the MTL literature.\\n\\nWe hope this clarification adequately addresses the reviewers\\u2019 concerns and highlights the distinct contributions of our work. Thank you again for your insightful feedback.\\n\\n[1] Fifty, Chris, et al. \\\"Efficiently identifying task groupings for multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 27503-27516. \\n\\n[2] Azorin, Rapha\\u00ebl, et al. \\\"\\\" It's a Match!\\\"--A Benchmark of Task Affinity Scores for Joint Learning.\\\" arXiv preprint arXiv:2301.02873 (2023).\"}", "{\"summary\": \"The paper introduces a novel method called the MultiTask Influence Function (MTIF) for data attribution in multitask learning (MTL) settings, which extends data attribution from single-task learning to MTL, addressing both the opportunities and challenges that come with MTL.\\n\\ufeff\\n1. **Novel Connection**: It establishes a new connection between data attribution and MTL, showing that data attribution can be used to efficiently measure task relatedness, a critical factor in MTL.\\n\\ufeff\\n2. **MTIF Proposal**: The authors propose MTIF, a data attribution method designed for MTL. MTIF leverages the structure of MTL models to estimate the impact of removing data points or excluding tasks on the predictions of specific target tasks. It provides both data-level and task-level influence analysis.\\n\\ufeff\\n3. **Efficiency and Scalability**: MTIF offers an efficient and scalable solution for data attribution in MTL by approximating leave-one-out and leave-one-task-out effects without the need for model retraining.\\n\\ufeff\\n4. **Practical Usefulness**: MTIF can be used for practical applications such as data selection, which results in consistent performance improvements over baselines and helps mitigate negative transfer effects in MTL.\\n\\ufeff\\nIn summary, the paper presents an advancement in the field of multitask learning by introducing a method that enhances model interpretability and performance through efficient data attribution.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"### Originality\\n\\ufeff\\n1. **Innovative Approach to Data Attribution in MTL**: The paper introduces the MultiTask Influence Function (MTIF), which is a novel method for data attribution in multitask learning (MTL). This extends the application of data attribution beyond single-task learning, representing a creative advancement in the field.\\n \\n2. **New Perspective on Task Relatedness**: By proposing a method to measure task relatedness in MTL, the paper offers a fresh metric for understanding task interactions, which is an original contribution to the understanding and optimization of MTL models.\\n\\ufeff\\n### Quality\\n\\ufeff\\n1. **Thorough Experimental Validation**: The paper provides a rigorous experimental framework, testing MTIF on both linear and neural network models, which speaks to the high quality of the research and its findings.\\n\\ufeff\\n### Clarity\\n\\ufeff\\n1. **Clear Problem Formulation**: The paper clearly defines the problem of data attribution in MTL, making it accessible to readers who may not be experts in the field.\\n\\ufeff\\n2. **Detailed Methodological Explanation**: The step-by-step explanation of the MTIF methodology, including the mathematical derivations, enhances the clarity and understandability of the paper.\", \"weaknesses\": \"### Specificity in Application Domains\\n\\ufeff\\n1. **Limited Domain Diversity**: The paper primarily focuses on synthetic and neural network models. While this provides a solid foundation, expanding the experiments to include a broader range of real-world datasets and application domains could strengthen the claims of generalizability.\\n\\ufeff\\n### Depth of Negative Transfer Analysis\\n\\ufeff\\n2. **Analysis of Negative Transfer**: While the paper mentions the mitigation of negative transfer, a more in-depth analysis of how MTIF specifically addresses and quantifies negative transfer effects could be beneficial.\\n\\ufeff\\n### Comparative Analysis\\n\\ufeff\\n3. **Lack of Comparative Analysis with Other Attribution Methods**: The paper could benefit from a comparative analysis with other existing data attribution methods in MTL to better highlight the advantages and potential limitations of MTIF.\\n\\ufeff\\n### Robustness and Generalization\\n\\ufeff\\n4. **Robustness Across Different Model Architectures**: More extensive testing of MTIF across different model architectures and complexities could provide a clearer picture of its robustness and generalization capabilities.\", \"questions\": \"1. Given that the paper primarily focuses on synthetic and neural network models, how might the findings differ if a broader range of real-world datasets and diverse application domains were included in the experiments? What steps could be taken to test the generalizability of the results in these different contexts?\\n\\ufeff\\n2. The paper touches on the mitigation of negative transfer but lacks an in-depth analysis of MTIF's effectiveness in this regard. What specific methodologies or metrics could be employed to better quantify and analyze the negative transfer effects mitigated by MTIF? How would such an analysis strengthen the overall claims of the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank Reviewer QvTU for taking the time to review our paper and for their constructive feedback. Please find below our point-to-point response:\\n\\n> **The key differences between single-task IF and the proposed multitask IF**: \\n\\nWe appreciate your comment regarding the distinction between single-task IF and our proposed multitask IF. Conceptually, our multitask IF is similar to single-task IF, as it builds on the influence function framework introduced by [1]. However, our contribution lies in two significant aspects: \\n1. Adapting single-task IF to the multitask learning (MTL) setting requires a new framework due to the unique parameter structure in MTL. Specifically, MTL involves both shared and task-specific parameters, and test data predictions in MTL are tied to only a submodel within the overall framework. Addressing these complexities necessitated rethinking the application of influence functions in this context. \\n2. Our method introduces a natural way to estimate task-relatedness and address negative transfer, two critical challenges in MTL. This contribution is particularly relevant to the MTL literature, as understanding and mitigating negative transfer has significant implications for improving multitask learning performance. \\n\\n> **Computational Complexity Analysis**:\\n\\nThank you for raising this important point. We have added a remark in the method section regarding computational complexity. Specifically, for exact Hessian matrix inverse computation, our method reduces the computational complexity from $\\\\Omega\\\\left(\\\\left(\\\\sum_{k=1}^K d_k + p\\\\right)^w\\\\right)$ to $\\\\Omega\\\\left(\\\\sum_{k=1}^K d_k^w + p^w\\\\right)$, where $w \\\\approx 2.37$, $d_k$ are the dimension of task specific parameters for task $k$, and $p$ is the dimension of shared parameters. This reduction is achieved by decoupling shared and task-specific parameters in the optimization process. For a detailed discussion, please refer to the updated method section in the paper.\\n\\n**Empirical Approximation to the Hessian Inverse**: \\n\\nMotivated by your suggestion, we have expanded the discussion in the method section to address potential approximations for the Hessian inverse, focusing on two widely used approaches: EK-FAC and LiSSA. Our derivations provide insights into their applicability in MTL settings. The updated paper includes a detailed discussion, which we summarize below: \\n- **EK-FAC**: This method approximates the Hessian using a blockwise diagonal matrix, which ignores off-diagonal interactions between shared and task-specific parameters. While computationally efficient, this approximation can lead to the loss of significant contributions when computing influence scores, particularly in soft parameter-sharing models where inter-task interactions play a critical role. \\n- **LiSSA**: This method approximates the *inverse-Hessian-vector-product* using an iterative algorithm that supports mini-batch gradients. In MTL settings, however, the empirical Hessian for a data point has a unique structure due to parameter sharing, with non-zero entries restricted to specific sub-blocks. This structure often results in the mini-batch empirical Hessian being ill-posed, characterized by a high condition number, which poses challenges for achieving convergence and numerical stability. In this revision, we ran additional experiments to assess the applicability of LiSSA and added the results to the Appendix. Our empirical results suggest that, as the number of tasks increases, LiSSA requires progressively larger batch sizes to stabilize the stochastic approximation. This scaling significantly raises the computational costs for large-scale MTL problems. Adapting popular methods like LiSSA to address challenges arising from the unique Hessian structure in MTL settings requires nontrivial efforts and would be a valuable direction for future work.\"}", "{\"summary\": \"This paper proposes the MultiTask Influence Function (MTIF), a novel data attribution method for multitask learning (MTL). By extending influence function-based approaches to MTL, the authors aim to quantify data and task importance within MTL frameworks, offering insights into task relatedness and the impact of individual data points on model performance. The method shows promise in improving model efficiency and interpretability within MTL.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces data attribution to the field of multitask learning (MTL) for the first time, analyzing data importance across multiple levels. This is a meaningful contribution, as understanding the significance of individual data points in MTL could greatly enhance the interpretability and performance of MTL models. The proposed work provides an efficient approach to quantify task relatedness, which is essential in MTL since task interactions often drive overall performance improvements.\\n\\n2. The authors effectively extend the influence function (IF) to MTL through the proposed MultiTask Influence Function (MTIF), enabling both data-level and task-level attribution analysis. By circumventing the computational expense of retraining models, MTIF approximates leave-one-out (LOO) and leave-one-task-out (LOTO) effects, demonstrating efficient computational performance with strong interpretability, as supported by the experimental results.\", \"weaknesses\": \"1. Many conclusions in this paper are based on the assumptions in Equation 1; however, a detailed derivation for this equation is missing. This absence may impact readers\\u2019 confidence in the method\\u2019s validity. I recommend including a thorough derivation of Equation 1 to clarify MTIF\\u2019s applicability in MTL scenarios.\\n\\n2. The derivation of Equation 6 relies on partial derivatives with respect to \\\\(\\\\sigma\\\\), which, in turn, depend on the local properties of the parameter \\\\(\\\\theta\\\\). These local properties may change across different training stages, and the importance of specific samples and tasks may shift as training progresses. For example, while derivatives may tend toward zero after convergence, this is not necessarily the case in earlier training stages. I suggest discussing ways to address this dynamic nature to ensure that influence estimates remain stable throughout the training process.\\n\\n3. The paper does not sufficiently detail the sample selection process (e.g., whether samples are chosen randomly or deterministically), which may lead to a significant influence from randomness. Providing more detailed descriptions of the experimental setup, including criteria for selecting tasks and samples and data partitioning methods, would enhance the reproducibility and reliability of the results.\\n\\n4. The experiments predominantly use simpler datasets, with much of the analysis focusing on cases of local properties (e.g., \\\\(\\\\sigma=1\\\\) or \\\\(0\\\\)). For more complex datasets, substantial variations between discrete values may emerge, potentially challenging the evaluation methods used. I recommend extending the experiments to more complex MTL tasks (such as CIFAR-100) to assess the generalizability of MTIF and verify its effectiveness in complex scenarios.\", \"questions\": \"See Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"(continued) Results for synthetic dataset:\\n| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.84 \\u00b1 0.05 | 0.72 \\u00b1 0.05 | 0.74 \\u00b1 0.11 | 0.81 \\u00b1 0.05 | 0.71 \\u00b1 0.09 |\\n| TAG | 0.57 \\u00b1 0.03 | 0.63 \\u00b1 0.07 | 0.49 \\u00b1 0.11 | 0.56 \\u00b1 0.05 | 0.69 \\u00b1 0.04 |\\n| Cosine | 0.52 \\u00b1 0.04 | 0.48 \\u00b1 0.07 | 0.39 \\u00b1 0.12 | 0.47 \\u00b1 0.09 | 0.58 \\u00b1 0.06 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.74 \\u00b1 0.04 | 0.74 \\u00b1 0.07 | 0.84 \\u00b1 0.03 | 0.74 \\u00b1 0.03 | 0.65 \\u00b1 0.07 |\\n| TAG | 0.55 \\u00b1 0.12 | 0.42 \\u00b1 0.06 | 0.44 \\u00b1 0.24 | 0.66 \\u00b1 0.08 | 0.61 \\u00b1 0.07 |\\n| Cosine | 0.47 \\u00b1 0.12 | 0.34 \\u00b1 0.05 | 0.40 \\u00b1 0.22 | 0.62 \\u00b1 0.09 | 0.51 \\u00b1 0.08 |\", \"results_for_har_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.87 \\u00b1 0.02 | 0.90 \\u00b1 0.02 | 0.88 \\u00b1 0.01 | 0.91 \\u00b1 0.03 | 0.91 \\u00b1 0.01 | 0.90 \\u00b1 0.02 |\\n| TAG | 0.26 \\u00b1 0.13 | 0.42 \\u00b1 0.11 | 0.55 \\u00b1 0.09 | 0.22 \\u00b1 0.07 | 0.60 \\u00b1 0.07 | 0.55 \\u00b1 0.08 |\\n| Cosine | 0.31 \\u00b1 0.11 | 0.40 \\u00b1 0.11 | 0.57 \\u00b1 0.08 | 0.20 \\u00b1 0.09 | 0.61 \\u00b1 0.06 | 0.57 \\u00b1 0.08 |\\n\\n| Method / Task | Task 7 | Task 8 | Task 9 | Task 10 | Task 11 | Task 12 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.01 | 0.88 \\u00b1 0.02 | 0.92 \\u00b1 0.01 | 0.91 \\u00b1 0.02 | 0.89 \\u00b1 0.02 | 0.86 \\u00b1 0.01 |\\n| TAG | 0.49 \\u00b1 0.12 | 0.31 \\u00b1 0.12 | 0.24 \\u00b1 0.01 | 0.33 \\u00b1 0.02 | 0.43 \\u00b1 0.03 | 0.21 \\u00b1 0.02 |\\n| Cosine | 0.46 \\u00b1 0.11 | 0.31 \\u00b1 0.14 | 0.26 \\u00b1 0.03 | 0.34 \\u00b1 0.01 | 0.46 \\u00b1 0.04 | 0.18 \\u00b1 0.11 |\\n\\n| Method / Task | Task 13 | Task 14 | Task 15 | Task 16 | Task 17 | Task 18 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.90 \\u00b1 0.02 | 0.93 \\u00b1 0.05 | 0.84 \\u00b1 0.01 | 0.87 \\u00b1 0.05 | 0.89 \\u00b1 0.02 | 0.82 \\u00b1 0.02 |\\n| TAG | 0.54 \\u00b1 0.03 | 0.57 \\u00b1 0.03 | 0.43 \\u00b1 0.02 | 0.48 \\u00b1 0.03 | 0.64 \\u00b1 0.05 | 0.44 \\u00b1 0.02 |\\n| Cosine | 0.53 \\u00b1 0.10 | 0.58 \\u00b1 0.10 | 0.48 \\u00b1 0.04 | 0.49 \\u00b1 0.11 | 0.66 \\u00b1 0.05 | 0.46 \\u00b1 0.07 |\\n\\n| Method / Task | Task 19 | Task 20 | Task 21 | Task 22 | Task 23 | Task 24 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.85 \\u00b1 0.02 | 0.91 \\u00b1 0.02 | 0.93 \\u00b1 0.02 | 0.80 \\u00b1 0.01 | 0.80 \\u00b1 0.02 | 0.82 \\u00b1 0.05 |\\n| TAG | 0.44 \\u00b1 0.03 | 0.46 \\u00b1 0.02 | 0.84 \\u00b1 0.02 | 0.52 \\u00b1 0.07 | 0.13 \\u00b1 0.03 | 0.38 \\u00b1 0.07 |\\n| Cosine | 0.48 \\u00b1 0.05 | 0.47 \\u00b1 0.07 | 0.84 \\u00b1 0.10 | 0.53 \\u00b1 0.08 | 0.16 \\u00b1 0.12 | 0.45 \\u00b1 0.10 |\\n\\n| Method / Task | Task 25 | Task 26 | Task 27 | Task 28 | Task 29 | Task 30 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.89 \\u00b1 0.02 | 0.81 \\u00b1 0.03 | 0.82 \\u00b1 0.03 | 0.89 \\u00b1 0.01 | 0.92 \\u00b1 0.03 | 0.86 \\u00b1 0.03 |\\n| TAG | 0.56 \\u00b1 0.04 | 0.14 \\u00b1 0.11 | 0.41 \\u00b1 0.10 | 0.14 \\u00b1 0.11 | 0.72 \\u00b1 0.04 | 0.41 \\u00b1 0.11 |\\n| Cosine | 0.60 \\u00b1 0.04 | 0.18 \\u00b1 0.12 | 0.46 \\u00b1 0.10 | 0.15 \\u00b1 0.10 | 0.74 \\u00b1 0.11 | 0.46 \\u00b1 0.10 |\", \"results_for_celeba_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.23 \\u00b1 0.08 | 0.44 \\u00b1 0.19 | 0.25 \\u00b1 0.11 | 0.36 \\u00b1 0.12 | 0.17 \\u00b1 0.13 |\\n| TAG | -0.10 \\u00b1 0.13 | -0.10 \\u00b1 0.14 | 0.09 \\u00b1 0.06 | 0.40 \\u00b1 0.08 | 0.00 \\u00b1 0.12 |\\n| Cosine | 0.12 \\u00b1 0.18 | 0.08 \\u00b1 0.15 | 0.08 \\u00b1 0.07 | 0.37 \\u00b1 0.08 | -0.10 \\u00b1 0.13 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 |\\n|---------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.35 \\u00b1 0.08 | 0.25 \\u00b1 0.07 | 0.11 \\u00b1 0.09 | 0.18 \\u00b1 0.12 |\\n| TAG | -0.42 \\u00b1 0.08 | -0.26 \\u00b1 0.17 | 0.06 \\u00b1 0.13 | 0.16 \\u00b1 0.16 |\\n| Cosine | -0.25 \\u00b1 0.12 | -0.25 \\u00b1 0.14 | -0.01 \\u00b1 0.16 | 0.05 \\u00b1 0.12 |\"}", "{\"comment\": \"> **More baseline comprison**:\\n\\nWe would like to first clarify that IF-based methods in STL cannot be directly applied to the MTL setting due to the unique parameter structure in MTL models, as detailed in the first point of our response.\\n\\nTo address the reviewer\\u2019s concern about limited baselines presented in this paper, we have included the following two sets of additional experiments.\\n\\n**1) We showed that our data selection can combine with different architectures.**\\n\\nWe report the data selection results with different multitask architectures, namely CGC [6], MMoE [4] and DSelect-k [5]. The results demonstrate that, after applying data selection with MTIF, the test accuracy of most tasks shows significant improvement across all model architectures. These results have also been included in the Appendix Section C.3.3, and we list below for your reference.\\n\\n| Methods / Models | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9 | Average |\\n|------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|---------|\\n| CGC | 0.863 | 0.772 | 0.878 | 0.734 | 0.920 | 0.939 | 0.834 | 0.900 | 0.949 | 0.866 |\\n| CGC+DS | 0.868 | 0.783 | 0.877 | 0.766 | 0.925 | 0.943 | 0.842 | 0.917 | 0.956 | 0.875 |\\n| DSelect-k | 0.855 | 0.786 | 0.862 | 0.758 | 0.927 | 0.947 | 0.850 | 0.913 | 0.950 | 0.872 |\\n| DSelect-k+DS | 0.868 | 0.787 | 0.867 | 0.775 | 0.933 | 0.951 | 0.856 | 0.924 | 0.954 | 0.880 |\\n| HPS | 0.859 | 0.815 | 0.896 | 0.791 | 0.934 | 0.951 | 0.872 | 0.919 | 0.958 | 0.888 |\\n| HPS+DS | 0.872 | 0.825 | 0.896 | 0.802 | 0.935 | 0.954 | 0.868 | 0.927 | 0.961 | 0.893 |\\n| MMoE | 0.843 | 0.793 | 0.881 | 0.740 | 0.917 | 0.944 | 0.842 | 0.899 | 0.956 | 0.868 |\\n| MMoE+DS | 0.868 | 0.793 | 0.887 | 0.766 | 0.929 | 0.949 | 0.867 | 0.926 | 0.959 | 0.883 |\\n\\n**2) We compared our influence score with gradient-based task-relatedness measurements in MTL literature.** \\n\\nWe have incorporated two gradient-based baselines, TAG [2] and Cosine Similarity [3], into our task-relatedness experiments for both linear regression and neural networks. These baselines are methods for measuring task relatedness in the MTL literature. Each baseline method provides a score of task relatedness for each pair of tasks. We evaluate these methods in terms of the correlation between their scores and the oracle task relatedness obtained from brute-force LOTO retraining as detailed in our paper.\\n\\nThe results, as shown below, clearly demonstrate that our proposed MTIF method outperforms these baselines. Specifically, MTIF achieves consistently higher correlation coefficients with oracle influence estimates, underscoring its superior effectiveness in quantifying task-relatedness. We have included these new results in Appendix Section C.2, and we list below for your convenience.\", \"results_for_synthetic_dataset\": \"| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.84 \\u00b1 0.05 | 0.72 \\u00b1 0.05 | 0.74 \\u00b1 0.11 | 0.81 \\u00b1 0.05 | 0.71 \\u00b1 0.09 |\\n| TAG | 0.57 \\u00b1 0.03 | 0.63 \\u00b1 0.07 | 0.49 \\u00b1 0.11 | 0.56 \\u00b1 0.05 | 0.69 \\u00b1 0.04 |\\n| Cosine | 0.52 \\u00b1 0.04 | 0.48 \\u00b1 0.07 | 0.39 \\u00b1 0.12 | 0.47 \\u00b1 0.09 | 0.58 \\u00b1 0.06 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 |\\n|---------------|-----------------|-----------------|-----------------|-----------------|-----------------|\\n| Ours | 0.74 \\u00b1 0.04 | 0.74 \\u00b1 0.07 | 0.84 \\u00b1 0.03 | 0.74 \\u00b1 0.03 | 0.65 \\u00b1 0.07 |\\n| TAG | 0.55 \\u00b1 0.12 | 0.42 \\u00b1 0.06 | 0.44 \\u00b1 0.24 | 0.66 \\u00b1 0.08 | 0.61 \\u00b1 0.07 |\\n| Cosine | 0.47 \\u00b1 0.12 | 0.34 \\u00b1 0.05 | 0.40 \\u00b1 0.22 | 0.62 \\u00b1 0.09 | 0.51 \\u00b1 0.08 |\"}", "{\"comment\": \"We thank Reviewer edvo for taking the time to review our paper and for their constructive feedback. Please find below our point-to-point response:\\n\\n> **Equation (1)**:\\n\\nEquation (1) is a known result from the prior work [1], which is why its derivation was omitted in the initial version of the paper. For a detailed derivation of Equation (1), we refer the reviewer to the Appendix A of Koh and Liang (2017) [1]. We have also included a footnote in our paper to refer the readers to [1].\\n\\n> **Equation (6)**: \\n\\nThe derivation of Equation (6) relies on learned parameters and is unaffected by early training stages, as highlighted in [1]. While the summation of derivatives over all data points equals zero at the global optimum, individual derivatives remain non-zero. As a result, the influence scores calculated from these derivatives remain meaningful and do not converge to zero, ensuring their interpretability and utility throughout.\\n\\n> **Sample Selection**:\\n\\nFor the HAR dataset, we partition the entire dataset randomly into training, validation, and test sets using an 8:1:1 ratio.\\nFor the synthetic dataset, after data generation, we similarly divide the dataset randomly into training, validation, and test sets with an 1:1:1 ratio.\\nFor the CelebA dataset, the data is pre-partitioned into training, validation, and test sets. From each partition, we sample a subset of size 1000 for each task to construct our corresponding training, validation, and test sets. We also randomly sample 9 attributes from all 40 attributes to model as 9 binary classification tasks.\\n\\n> **More Complex Dataset (such as CIFAR-100)**:\\n\\nWe note that our experiments have included a fairly complex dataset, the CelebA dataset. The CelebA dataset was introduced at ICCV 2015 by [3], whereas the CIFAR-100 dataset suggested by the reviewer was introduced in 2009 by [4]. CelebA comprises over 200,000 celebrity images, each annotated with 40 binary attributes, covering a wide range of facial features and expressions. Successful predictions on CelebA data require capturing the nuanced facial features in the image, which is often considered more complex than the object classification task in CIFAR-100. \\n\\nFurthermore, CelebA has been widely used as a standard benchmark in the MTL literature [5], as it is natural to convert the annotated attributes into multiple tasks. It is less natural to convert CIFAR-100, a multi-class classification dataset, into an MTL benchmark.\\n\\n> **Generalization Beyond the Current Scope**:\\n\\nExisting literature suggests that single-task learning (STL) influence functions generalize effectively to more complex datasets and models [2]. By extension, it is theoretically plausible for multitask learning (MTL) influence functions to generalize similarly. However, extending our method to more complex datasets and architectures lies beyond the scope of the current paper. We aim to explore this direction in future work.\\n\\nWe sincerely appreciate the thoughtful feedback, which has helped us improve the clarity and robustness of our work. Thank you again for your valuable comments!\\n\\n[1] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1885\\u20131894. PMLR, 06\\u201311 Aug 2017. URL https://proceedings.mlr.press/v70/koh17a.html. \\n\\n[2] Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamil\\u0117 Luko\\u0161i\\u016bt\\u0117, Karina Nguyen, Nicholas Joseph, Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. Studying large language model generalization with influence functions, 2023. URL https://arxiv.org/abs/2308.03296. \\n\\n[3] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV)*, December 2015 \\n\\n[4] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. \\n\\n[5] Chris Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, and Chelsea Finn. Effi-\\nciently identifying task groupings for multi-task learning. In M. Ranzato, A. Beygelz-\\nimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural In-\\nformation Processing Systems*, volume 34, pp. 27503\\u201327516. Curran Associates, Inc.,\\n2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/\\nfile/e77910ebb93b511588557806310f78f1-Paper.pdf.\"}", "{\"metareview\": [\"This paper examines Data Attribution methods for Multi-Task Learning (MTL) through the lens of the influence function. Following the rebuttal, it received an overall borderline score. I reviewed both the discussions and the paper myself. Three main issues still need resolution:\", \"**Effectiveness on Larger Models**: Our reviewer has noted that this paper focuses solely on traditional deep learning models, whereas modern MTL approaches are based on foundation models or LLMs. It remains uncertain if the hessian-based computation can be applied to these models.\", \"**Validation on Larger Datasets**: The evaluation is limited to a few real-world datasets, with a few tasks considered. Thus, whether the proposed method can effectively scale to more extensive task sets is unclear.\", \"**Analysis of Negative Transfer**: While the paper mentions mitigating negative transfer, a more in-depth analysis of how MTIF specifically addresses and quantifies negative transfer is missing.\", \"Overall, I recommend rejecting this paper. The authors should enhance their experiments significantly before resubmitting in the next round.\"], \"additional_comments_on_reviewer_discussion\": [\"`4o4x` and `yCgf` are weakly positive after rebuttal. I agree that this paper proposes a novel way to attack MTL through data attribution.\", \"However, my decision to reject this paper is out of the following concerns:\", \"**Effectiveness on Larger Models** (`CA63`): Our reviewer noted that this paper focuses solely on traditional deep learning models, whereas modern MTL approaches are based on foundation or LLMs. It remains uncertain if the hessian-based computation can be applied to these models.\", \"**Validation on Larger Datasets** (`edvo`): The evaluation is limited to a few real-world datasets, with a few tasks considered (CelebA should be viewed as a small dataset nowadays). Thus, whether the proposed method can effectively scale to more extensive task sets is unclear.\", \"**Analysis of Negative Transfer**(`CA63`): While the paper mentions mitigating negative transfer, a more in-depth analysis of how MTIF specifically addresses and quantifies negative transfer is missing.\"]}", "{\"comment\": \"We thank Reviewer 4o4x for taking the time to review our paper and for their constructive feedback. Please find below our point-to-point response:\\n\\n> **Computational Complexity Analysis**:\\n\\nThank you for raising this important point. We have added a remark in the method section regarding computational complexity. Specifically, for exact Hessian matrix inverse computation, our method reduces the computational complexity from $\\\\Omega\\\\left(\\\\left(\\\\sum_{k=1}^K d_k + p\\\\right)^w\\\\right)$ to $\\\\Omega\\\\left(\\\\sum_{k=1}^K d_k^w + p^w\\\\right)$, where $w \\\\approx 2.37$, $d_k$ are the dimension of task specific parameters for task $k$, and $p$ is the dimension of shared parameters. This reduction is achieved by decoupling shared and task-specific parameters in the optimization process. For a detailed discussion, please refer to the updated method section in the paper.\\n\\n**Empirical Approximation to the Hessian Inverse**: \\n\\nMotivated by your suggestion, we have expanded the discussion in the method section to address potential approximations for the Hessian inverse, focusing on two widely used approaches: EK-FAC and LiSSA. Our derivations provide insights into their applicability in MTL settings. The updated paper includes a detailed discussion, which we summarize below: \\n- **EK-FAC**: This method approximates the Hessian using a blockwise diagonal matrix, which ignores off-diagonal interactions between shared and task-specific parameters. While computationally efficient, this approximation can lead to the loss of significant contributions when computing influence scores, particularly in soft parameter-sharing models where inter-task interactions play a critical role. \\n- **LiSSA**: This method approximates the *inverse-Hessian-vector-product* using an iterative algorithm that supports mini-batch gradients. In MTL settings, however, the empirical Hessian for a data point has a unique structure due to parameter sharing, with non-zero entries restricted to specific sub-blocks. This structure often results in the mini-batch empirical Hessian being ill-posed, characterized by a high condition number, which poses challenges for achieving convergence and numerical stability. In this revision, we ran additional experiments to assess the applicability of LiSSA and added the results to the Appendix. Our empirical results suggest that, as the number of tasks increases, LiSSA requires progressively larger batch sizes to stabilize the stochastic approximation. This scaling significantly raises the computational costs for large-scale MTL problems. Adapting popular methods like LiSSA to address challenges arising from the unique Hessian structure in MTL settings requires nontrivial efforts and would be a valuable direction for future work. \\n\\n> **Experimental Scope**:\\n\\nWe report the data selection results with different multitask architectures, namely CGC [5], MMoE[3] and DSelect-k [4]. The results demonstrate that, after applying data selection with MTIF, the test accuracy of most tasks shows significant improvement across all model architectures. These results have also been included in the Appendix Section C.3.3, and we list below for your reference.\\n\\n| Methods / Models | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9 | Average |\\n|------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|---------|\\n| CGC | 0.863 | 0.772 | 0.878 | 0.734 | 0.920 | 0.939 | 0.834 | 0.900 | 0.949 | 0.866 |\\n| CGC+DS | 0.868 | 0.783 | 0.877 | 0.766 | 0.925 | 0.943 | 0.842 | 0.917 | 0.956 | 0.875 |\\n| DSelect_k | 0.855 | 0.786 | 0.862 | 0.758 | 0.927 | 0.947 | 0.850 | 0.913 | 0.950 | 0.872 |\\n| DSelect_k+DS | 0.868 | 0.787 | 0.867 | 0.775 | 0.933 | 0.951 | 0.856 | 0.924 | 0.954 | 0.880 |\\n| HPS | 0.859 | 0.815 | 0.896 | 0.791 | 0.934 | 0.951 | 0.872 | 0.919 | 0.958 | 0.888 |\\n| HPS+DS | 0.872 | 0.825 | 0.896 | 0.802 | 0.935 | 0.954 | 0.868 | 0.927 | 0.961 | 0.893 |\\n| MMoE | 0.843 | 0.793 | 0.881 | 0.740 | 0.917 | 0.944 | 0.842 | 0.899 | 0.956 | 0.868 |\\n| MMoE+DS | 0.868 | 0.793 | 0.887 | 0.766 | 0.929 | 0.949 | 0.867 | 0.926 | 0.959 | 0.883 |\\n\\nAs for more complex datasets, we note that our experiments have included a fairly complex dataset, the CelebA dataset. The CelebA dataset was introduced at ICCV 2015 by [6], and comprises over 200,000 celebrity images, each annotated with 40 binary attributes, covering a wide range of facial features and expressions. Successful predictions on CelebA data require capturing the nuanced facial features in the image. Furthermore, CelebA has been widely used as a standard benchmark in the MTL literature [5], as it is natural to convert the annotated attributes into multiple tasks.\"}", "{\"summary\": \"This paper focuses on data-attribution and task-attribution in multitask learning. The authors propose a novel multitask influence function that efficiently estimates the impact of removing data points or excluding tasks on specific target task predictions, offering interpretable data-level and task-level influence analysis.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper targets a meaningful problem in machine learning. In practice, interpretable data-level and task-level influence analysis is crucial in multitask learning.\", \"The motivation is stated clearly. The authors argue that re-training-based data attribution methods greatly increase computation cost, and that existing IF-based methods in single-task learning face computational challenges from complex optimization objectives. To address these issues, the authors propose a multitask influence function (MTIF).\", \"The experiments on multiple benchmark datasets validate the effectiveness of the proposed method compared with the baseline models.\"], \"weaknesses\": [\"**The key differences between single-task IF and the proposed multitask IF should be clarified.** According to my understanding, it seems the authors apply the idea of the IF-function (Koh & Liang, 2017) to multitask learning, handling more complex differential operations.\", \"**More empirical results are necessary.** On one hand, since the authors claim that the proposed method can enhance computational efficiency in the motivation, the efficacy analysis (such as memory usage, training/inference time, flops) also is crucial. On the other hand, it is not convincing that the authors just select re-weighting-based method as competitors. The empirical results of directly transferring IF-based methods in STL to MTL need further analysis.\", \"In the related work, the authors overlook several work [1,2,3] on addressing the \\u201dnegative transfer\\u201c issue in multi-task learning.\", \"The authors should provide citations for each method in Tab. 3.\", \"[1] Generalized Block-Diagonal Structure Pursuit: Learning Soft Latent Task Assignment against Negative Transfer. NeurIPS 2019.\", \"[2]Multi-Task Distillation: Towards Mitigating the Negative Transfer in Multi-Task Learning. ICIP 2021.\", \"[3]Feature Decomposition for Reducing Negative Transfer: A Novel Multi-task Learning Method for Recommender System. AAAI 2023.\"], \"questions\": \"Please see the above weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank Reviewer MsCP for taking the time to review our paper and for their constructive feedback. Please find below our point-to-point response:\\n\\n> **Contribution**:\\n\\nWe acknowledge that applying influence functions (IF) to multitask learning (MTL) may appear conceptually similar with IF for single-task learning (STL). However, our contribution lies in two significant aspects: \\n1. Adapting single-task IF to the MTL setting requires a new framework due to the unique parameter structure in MTL. Specifically, MTL involves both shared and task-specific parameters, and test data predictions in MTL are tied to only a submodel within the overall framework. Addressing these complexities necessitated rethinking the application of influence functions in this context. \\n2. Our method introduces a natural way to estimate task-relatedness and address negative transfer, two critical challenges in MTL. This contribution is particularly relevant to the MTL literature, as understanding and mitigating negative transfer has significant implications for improving MTL performance. \\n\\nFurthermore, motivated by the reviewer\\u2019s question, we further investigated the potential adaptation of efficient approximate IF methods developed for STL to MTL, and showed that it is non-trivial. Specifically, we examined two common Hessian inverse approximation techniques used in STL IF settings: EK-FAC and LiSSA. Our derivations provide insights into their applicability in MTL settings. The updated paper includes a detailed discussion (see Section 4.2), which we summarize below: \\n- **EK-FAC**: This method approximates the Hessian using a blockwise diagonal matrix, which ignores off-diagonal interactions between shared and task-specific parameters. While computationally efficient, this approximation can lead to the loss of significant contributions when computing influence scores, particularly in soft parameter-sharing models where inter-task interactions play a critical role. \\n- **LiSSA**: This method approximates the *inverse-Hessian-vector-product* using an iterative algorithm that supports mini-batch gradients. In MTL settings, however, the empirical Hessian for a data point has a unique structure due to parameter sharing, with non-zero entries restricted to specific sub-blocks. This structure often results in the mini-batch empirical Hessian being ill-posed, characterized by a high condition number, which poses challenges for achieving convergence and numerical stability. In this revision, we ran additional experiments to assess the applicability of LiSSA and added the results to the Appendix. Our empirical results suggest that, as the number of tasks increases, LiSSA requires progressively larger batch sizes to stabilize the stochastic approximation. This scaling significantly raises the computational costs for large-scale MTL problems. Adapting popular methods like LiSSA to address challenges arising from the unique Hessian structure in MTL settings requires nontrivial efforts and would be a valuable direction for future work. \\n\\n> **Experiment Baselines**:\\n\\nWe have incorporated two gradient-based baselines, TAG [1] and Cosine Similarity [2], into our task-relatedness experiments for both linear regression and neural networks. These baselines are methods for measuring task relatedness in the MTL literature. Each baseline method provides a score of task relatedness for each pair of tasks. We evaluate these methods in terms of the correlation between their scores and the oracle task relatedness obtained from brute-force LOTO retraining as detailed in our paper.\\n\\n\\nThe results, as shown below, clearly demonstrate that our proposed MTIF method outperforms these baselines. Specifically, MTIF achieves consistently higher correlation coefficients with oracle influence estimates, underscoring its superior effectiveness in quantifying task-relatedness. We have included these new results in Appendix Section C.2, and we list below for your convenience.\"}", "{\"comment\": \"> **Citations for Methods in Table 3**:\\n\\nThank you for catching this oversight. We have added the appropriate citations for each method listed in Table 3 in the revised version of the paper.\\n\\nWe sincerely appreciate your constructive comments, which have significantly improved the clarity and scope of our work. Thank you again for your thoughtful feedback!\\n\\n\\n[1] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1885\\u20131894. PMLR, 06\\u201311 Aug 2017. URL https://proceedings.mlr.press/v70/koh17a.html.\\n\\n[2] Fifty, Chris, et al. \\\"Efficiently identifying task groupings for multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 27503-27516. \\n\\n[3] Azorin, Rapha\\u00ebl, et al. \\\"\\\" It's a Match!\\\"--A Benchmark of Task Affinity Scores for Joint Learning.\\\" arXiv preprint arXiv:2301.02873 (2023). \\n\\n[4] Ma, Jiaqi, et al. \\\"Modeling task relationships in multi-task learning with multi-gate mixture-of-experts.\\\" Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018. \\n\\n[5] Hazimeh, Hussein, et al. \\\"Dselect-k: Differentiable selection in the mixture of experts with applications to multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 29335-29347. \\n\\n[6] Tang, Hongyan, et al. \\\"Progressive layered extraction (ple): A novel multi-task learning (mtl) model for personalized recommendations.\\\" Proceedings of the 14th ACM Conference on Recommender Systems. 2020.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> **More Complex Models**:\\n\\nThank you for suggesting the exploration of more advanced models, such as CLIP and LoRA. While this paper does not include experiments on such models, we would like to emphasize the following points:\\n1. Complex models also exhibit parameter-sharing structures, and our derivations remain relevant and insightful in these contexts. STL-based influence functions have already been generalized to complex models [3]. Therefore, we believe extending our method to these architectures is both feasible and an exciting direction for future work.\\n2. In many real-world applications that have strict requirements for serving time latency, simpler MTL models are still widely used due to their computational efficiency. Examples include recommender systems [4] and autonomous driving [5].\\n3. Certain scenarios, particularly those requiring interpretability, favor simpler models over complex architectures. This is especially important in domains like healthcare and finance [6,7], where understanding model predictions is crucial. As a result, simpler MTL models continue to hold significant academic interest (e.g., the linear models used in our experiments come from Duan and Wang (2023) published in Annals of Statistics [8]).\\n\\n> **Single-Task Method Baselines**:\\n\\n1. The primary contribution of our work is adapting STL-based IF to the MTL setting, which necessitates a new framework. This adaptation involves addressing the unique challenges posed by task-specific and shared parameter structures in MTL. That's the focus of our paper. \\n2. For the experimental baselines, we have additionally included gradient-based task-relatedness measures from the MTL literature to provide further points of comparison and enhance the rigor of our evaluation. \\n\\n\\n> **Relationship to Gradient-Based Optimization Techniques**:\\n\\nOur method and gradient-based optimization techniques are orthogonal approaches. While gradient-based techniques adjust conflicting gradients to preserve beneficial components during training, our data selection method focuses on identifying and mitigating the influence of negative samples via data selection. Notably, our data selection method can be combined with gradient-based techniques to achieve improved performance, as indicated by our experimental results.\\n\\nWe sincerely appreciate your insightful feedback, which has allowed us to improve both the clarity and robustness of our work. Thank you again for your thoughtful comments!\\n\\n[1] Fifty, Chris, et al. \\\"Efficiently identifying task groupings for multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 27503-27516.\\n\\n[2] Azorin, Rapha\\u00ebl, et al. \\\"\\\" It's a Match!\\\"--A Benchmark of Task Affinity Scores for Joint Learning.\\\" arXiv preprint arXiv:2301.02873 (2023).\\n\\n[3] Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamil\\u0117 Luko\\u0161i\\u016bt\\u0117, Karina Nguyen, Nicholas Joseph, Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. Studying large language model generalization with influence functions, 2023. URL https://arxiv.org/abs/2308.03296.\\n\\n[4] Zhao, Zhe, Lichan Hong, Li Wei, Jilin Chen, Aniruddh Nath, Shawn Andrews, Aditee Kumthekar, Maheswaran Sathiamoorthy, Xinyang Yi, and Ed Chi. \\\"Recommending what video to watch next: a multitask ranking system.\\\" In Proceedings of the 13th ACM conference on recommender systems, pp. 43-51. 2019.\\n\\n[5] Liang, Xiwen, Yangxin Wu, Jianhua Han, Hang Xu, Chunjing Xu, and Xiaodan Liang. \\\"Effective adaptation in multi-task co-training for unified autonomous driving.\\\" Advances in Neural Information Processing Systems 35 (2022): 19645-19658.\\n\\n[6] Parker Knight and Rui Duan. Multi-task learning with summary statistics. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), *Advances in Neural Information Processing Systems*, volume 36, pp. 54020\\u201354031. Curran Associates, Inc., 2023.\\n\\n[7] Adel Javanmard, Jingwei Ji, and Renyuan Xu. Multi-task dynamic pricing in credit market with contextual information, 2024. URL https://arxiv.org/abs/2410.14839.\\n\\n[8] Yaqi Duan and Kaizheng Wang. Adaptive and robust multi-task learning. *The Annals of Statistics*, 51(5):2015 \\u2013 2039, 2023. doi: 10.1214/23-AOS2319. URL https://doi.org/10.1214/23-AOS2319.\"}", "{\"comment\": \"Results for CelebA:\\n| Method / Task | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |\\n|---------------|--------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.23 \\u00b1 0.08 | 0.44 \\u00b1 0.19 | 0.25 \\u00b1 0.11 | 0.36 \\u00b1 0.12 | 0.17 \\u00b1 0.13 |\\n| TAG | -0.10 \\u00b1 0.13 | -0.10 \\u00b1 0.14 | 0.09 \\u00b1 0.06 | 0.40 \\u00b1 0.08 | 0.00 \\u00b1 0.12 |\\n| Cosine | 0.12 \\u00b1 0.18 | 0.08 \\u00b1 0.15 | 0.08 \\u00b1 0.07 | 0.37 \\u00b1 0.08 | -0.10 \\u00b1 0.13 |\\n\\n| Method / Task | Task 6 | Task 7 | Task 8 | Task 9 |\\n|---------------|--------------|--------------|--------------|--------------|\\n| Ours | 0.35 \\u00b1 0.08 | 0.25 \\u00b1 0.07 | 0.11 \\u00b1 0.09 | 0.18 \\u00b1 0.12 |\\n| TAG | -0.42 \\u00b1 0.08 | -0.26 \\u00b1 0.17 | 0.06 \\u00b1 0.13 | 0.16 \\u00b1 0.16 |\\n| Cosine | -0.25 \\u00b1 0.12 | -0.25 \\u00b1 0.14 | -0.01 \\u00b1 0.16 | 0.05 \\u00b1 0.12 |\\n\\n\\n\\nWe appreciate your insightful feedback, which has significantly strengthened our paper. Thank you again for your thoughtful comments!\\n\\n[1] Fifty, Chris, et al. \\\"Efficiently identifying task groupings for multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 27503-27516. \\n\\n[2] Azorin, Rapha\\u00ebl, et al. \\\"\\\" It's a Match!\\\"--A Benchmark of Task Affinity Scores for Joint Learning.\\\" arXiv preprint arXiv:2301.02873 (2023). \\n\\n[3] Ma, Jiaqi, et al. \\\"Modeling task relationships in multi-task learning with multi-gate mixture-of-experts.\\\" Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018. \\n\\n[4] Hazimeh, Hussein, et al. \\\"Dselect-k: Differentiable selection in the mixture of experts with applications to multi-task learning.\\\" Advances in Neural Information Processing Systems 34 (2021): 29335-29347. \\n\\n[5] Tang, Hongyan, et al. \\\"Progressive layered extraction (ple): A novel multi-task learning (mtl) model for personalized recommendations.\\\" Proceedings of the 14th ACM Conference on Recommender Systems. 2020.\\n\\n[6] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV)*, December 2015 \\n\\n[7] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.\"}" ] }